Jan 07 03:32:32 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 07 03:32:32 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 07 03:32:32 crc restorecon[4684]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc 
restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc 
restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 
03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc 
restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc 
restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 
crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 
03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc 
restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:32 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:32 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 07 03:32:33 crc restorecon[4684]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 07 03:32:33 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 07 03:32:33 crc kubenswrapper[4980]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 07 03:32:33 crc kubenswrapper[4980]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 07 03:32:33 crc kubenswrapper[4980]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 07 03:32:33 crc kubenswrapper[4980]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 07 03:32:33 crc kubenswrapper[4980]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 07 03:32:33 crc kubenswrapper[4980]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.530825 4980 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536688 4980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536717 4980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536730 4980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536742 4980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536752 4980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536766 4980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536780 4980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536794 4980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536806 4980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536818 4980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536829 4980 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536838 4980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536848 4980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536856 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536864 4980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536873 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536881 4980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536889 4980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536897 4980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536906 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536915 4980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536923 4980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536931 4980 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536940 4980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536948 4980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536956 4980 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536964 4980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536972 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.536996 4980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537006 4980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537014 4980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537022 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537030 4980 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537039 4980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537047 4980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537056 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537066 4980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537074 4980 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537083 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537091 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537100 4980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537108 4980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537116 4980 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537124 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537132 4980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537141 4980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537151 4980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537159 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537167 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537175 4980 feature_gate.go:330] unrecognized feature gate: Example
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537183 4980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537192 4980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537200 4980 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537208 4980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537217 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537225 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537235 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537242 4980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537251 4980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537259 4980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537267 4980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537275 4980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537283 4980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537292 4980 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537306 4980 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537316 4980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537326 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537336 4980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537344 4980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537353 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.537363 4980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537794 4980 flags.go:64] FLAG: --address="0.0.0.0"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537821 4980 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537845 4980 flags.go:64] FLAG: --anonymous-auth="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537861 4980 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537873 4980 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537883 4980 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537897 4980 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537909 4980 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537920 4980 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537930 4980 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537940 4980 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537951 4980 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537961 4980 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537971 4980 flags.go:64] FLAG: --cgroup-root=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537980 4980 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.537991 4980 flags.go:64] FLAG: --client-ca-file=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538000 4980 flags.go:64] FLAG: --cloud-config=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538010 4980 flags.go:64] FLAG: --cloud-provider=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538020 4980 flags.go:64] FLAG: --cluster-dns="[]"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538031 4980 flags.go:64] FLAG: --cluster-domain=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538040 4980 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538051 4980 flags.go:64] FLAG: --config-dir=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538061 4980 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538072 4980 flags.go:64] FLAG: --container-log-max-files="5"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538084 4980 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538094 4980 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538104 4980 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538114 4980 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538126 4980 flags.go:64] FLAG: --contention-profiling="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538136 4980 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538146 4980 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538156 4980 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538166 4980 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538195 4980 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538205 4980 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538215 4980 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538224 4980 flags.go:64] FLAG: --enable-load-reader="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538234 4980 flags.go:64] FLAG: --enable-server="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538244 4980 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538257 4980 flags.go:64] FLAG: --event-burst="100"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538268 4980 flags.go:64] FLAG: --event-qps="50"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538278 4980 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538288 4980 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538298 4980 flags.go:64] FLAG: --eviction-hard=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538309 4980 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538319 4980 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538329 4980 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538339 4980 flags.go:64] FLAG: --eviction-soft=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538350 4980 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538360 4980 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538370 4980 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538379 4980 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538389 4980 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538399 4980 flags.go:64] FLAG: --fail-swap-on="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538408 4980 flags.go:64] FLAG: --feature-gates=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538420 4980 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538430 4980 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538440 4980 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538451 4980 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538460 4980 flags.go:64] FLAG: --healthz-port="10248"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538470 4980 flags.go:64] FLAG: --help="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538480 4980 flags.go:64] FLAG: --hostname-override=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538490 4980 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538500 4980 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538510 4980 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538520 4980 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538529 4980 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538539 4980 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538549 4980 flags.go:64] FLAG: --image-service-endpoint=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538586 4980 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538596 4980 flags.go:64] FLAG: --kube-api-burst="100"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538606 4980 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538616 4980 flags.go:64] FLAG: --kube-api-qps="50"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538626 4980 flags.go:64] FLAG: --kube-reserved=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538636 4980 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538645 4980 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538657 4980 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538666 4980 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538679 4980 flags.go:64] FLAG: --lock-file=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538689 4980 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538699 4980 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538709 4980 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538725 4980 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538735 4980 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538745 4980 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538755 4980 flags.go:64] FLAG: --logging-format="text"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538764 4980 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538775 4980 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538784 4980 flags.go:64] FLAG: --manifest-url=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538793 4980 flags.go:64] FLAG: --manifest-url-header=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538806 4980 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538816 4980 flags.go:64] FLAG: --max-open-files="1000000"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538827 4980 flags.go:64] FLAG: --max-pods="110"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538839 4980 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538852 4980 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538865 4980 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538877 4980 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538889 4980 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538899 4980 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538909 4980 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538930 4980 flags.go:64] FLAG: --node-status-max-images="50"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538941 4980 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538950 4980 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538960 4980 flags.go:64] FLAG: --pod-cidr=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538969 4980 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538985 4980 flags.go:64] FLAG: --pod-manifest-path=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.538995 4980 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539005 4980 flags.go:64] FLAG: --pods-per-core="0"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539015 4980 flags.go:64] FLAG: --port="10250"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539025 4980 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539035 4980 flags.go:64] FLAG: --provider-id=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539046 4980 flags.go:64] FLAG: --qos-reserved=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539056 4980 flags.go:64] FLAG: --read-only-port="10255"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539066 4980 flags.go:64] FLAG: --register-node="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539076 4980 flags.go:64] FLAG: --register-schedulable="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539085 4980 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539140 4980 flags.go:64] FLAG: --registry-burst="10"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539150 4980 flags.go:64] FLAG: --registry-qps="5"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539159 4980 flags.go:64] FLAG: --reserved-cpus=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539168 4980 flags.go:64] FLAG: --reserved-memory=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539181 4980 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539191 4980 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539201 4980 flags.go:64] FLAG: --rotate-certificates="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539210 4980 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539220 4980 flags.go:64] FLAG: --runonce="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539229 4980 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539240 4980 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539251 4980 flags.go:64] FLAG: --seccomp-default="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539260 4980 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539270 4980 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539280 4980 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539290 4980 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539301 4980 flags.go:64] FLAG: --storage-driver-password="root"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539310 4980 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539320 4980 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539329 4980 flags.go:64] FLAG: --storage-driver-user="root"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539339 4980 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539349 4980 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539360 4980 flags.go:64] FLAG: --system-cgroups=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539369 4980 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539384 4980 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539394 4980 flags.go:64] FLAG: --tls-cert-file=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539403 4980 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539415 4980 flags.go:64] FLAG: --tls-min-version=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539425 4980 flags.go:64] FLAG: --tls-private-key-file=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539434 4980 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539444 4980 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539454 4980 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539465 4980 flags.go:64] FLAG: --v="2"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539478 4980 flags.go:64] FLAG: --version="false"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539490 4980 flags.go:64] FLAG: --vmodule=""
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539502 4980 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.539512 4980 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539764 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539777 4980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539788 4980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539798 4980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539807 4980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539816 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539824 4980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539833 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539843 4980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539852 4980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539862 4980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539874 4980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539886 4980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539897 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539909 4980 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539920 4980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539929 4980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539938 4980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539946 4980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539954 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539963 4980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539971 4980 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539980 4980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539988 4980 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.539997 4980 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540006 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540014 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540022 4980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540030 4980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540038 4980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540047 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540059 4980 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540070 4980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540080 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540089 4980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540099 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540108 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540117 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540310 4980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540319 4980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540327 4980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540337 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540352 4980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540360 4980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540368 4980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540377 4980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540386 4980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540394 4980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540403 4980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540411 4980 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540420 4980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540429 4980 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540437 4980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540445 4980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540463 4980 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540471 4980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540480 4980 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540489 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540497 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540505 4980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540513 4980 feature_gate.go:330] unrecognized feature gate: Example
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540522 4980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540530 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540539 4980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540547 4980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540580 4980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540592 4980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540604 4980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540616 4980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540626 4980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.540638 4980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.540962 4980 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.553613 4980 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.553664 4980 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553799 4980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553814 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553825 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553834 4980 feature_gate.go:330] unrecognized feature gate: Example Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553842 4980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553851 4980 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553861 4980 
feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553869 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553877 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553886 4980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553894 4980 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553904 4980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553913 4980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553921 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553928 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553936 4980 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553944 4980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553952 4980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553961 4980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553968 4980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553976 4980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 07 03:32:33 crc 
kubenswrapper[4980]: W0107 03:32:33.553984 4980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553992 4980 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.553999 4980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554007 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554016 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554024 4980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554033 4980 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554040 4980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554048 4980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554056 4980 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554064 4980 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554072 4980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554080 4980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554092 4980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554104 4980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554115 4980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554124 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554135 4980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554143 4980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554150 4980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554158 4980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554166 4980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554174 4980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554184 4980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554192 4980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554200 4980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554211 4980 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554224 4980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554234 4980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554242 4980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554250 4980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554259 4980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554267 4980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554276 4980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554284 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554292 4980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554299 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554307 4980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554316 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554323 4980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554331 4980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 
07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554339 4980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554346 4980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554354 4980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554362 4980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554370 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554377 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554387 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554395 4980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554404 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.554417 4980 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554690 4980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554706 4980 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554715 4980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554723 4980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554731 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554739 4980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554747 4980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554755 4980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554763 4980 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554772 4980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554780 4980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554788 4980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554797 4980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554805 4980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554812 4980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554820 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS 
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554828 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554836 4980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554844 4980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554852 4980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554859 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554867 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554874 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554882 4980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554890 4980 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554898 4980 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554906 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554915 4980 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554923 4980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554930 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554939 
4980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554947 4980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554955 4980 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554963 4980 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554973 4980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554981 4980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554989 4980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.554997 4980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555005 4980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555013 4980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555020 4980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555028 4980 feature_gate.go:330] unrecognized feature gate: Example Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555038 4980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555047 4980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555056 4980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555064 4980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555071 4980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555079 4980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555087 4980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555096 4980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555103 4980 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555113 4980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555145 4980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555155 4980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555166 4980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555176 4980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555187 4980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555195 4980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555204 4980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555213 4980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555221 4980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555232 4980 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555242 4980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555251 4980 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555261 4980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555271 4980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555280 4980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555289 4980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555297 4980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555305 4980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.555315 4980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.555326 4980 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.555639 4980 server.go:940] "Client rotation is on, will bootstrap in background" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.560051 4980 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.560183 4980 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.561057 4980 server.go:997] "Starting client certificate rotation" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.561101 4980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.561598 4980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 07:20:02.241803506 +0000 UTC Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.561801 4980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.569349 4980 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.572188 4980 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.575756 4980 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.585976 4980 log.go:25] "Validated CRI v1 runtime API" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.606269 4980 log.go:25] "Validated CRI v1 image API" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.608846 4980 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.611750 4980 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-07-03-28-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.611799 4980 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.634944 4980 manager.go:217] Machine: {Timestamp:2026-01-07 03:32:33.633425246 +0000 UTC m=+0.199120021 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9c9b768a-7681-4a73-9b43-d778a3c82c46 BootID:faa7e186-0b6e-43ad-a16a-d507c499b170 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:14:31:f0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:14:31:f0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3d:48:16 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cd:33:d8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ec:27:3f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9a:24:a4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:12:25:bf:e0:ca:00 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5a:5d:6c:16:95:11 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.635226 4980 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.635379 4980 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.636150 4980 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.636362 4980 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.636404 4980 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":nu
ll,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.636869 4980 topology_manager.go:138] "Creating topology manager with none policy" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.636893 4980 container_manager_linux.go:303] "Creating device plugin manager" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.637137 4980 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.637170 4980 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.637364 4980 state_mem.go:36] "Initialized new in-memory state store" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.637510 4980 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.638342 4980 kubelet.go:418] "Attempting to sync node with API server" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.638366 4980 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.638386 4980 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.638402 4980 kubelet.go:324] "Adding apiserver pod source" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.638417 4980 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.640468 4980 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.640999 4980 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.641030 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.641086 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.641114 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.641163 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.641951 4980 kubelet.go:854] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643232 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643305 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643322 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643337 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643360 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643388 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643403 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643424 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643442 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643458 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643479 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643493 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.643840 4980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.644862 4980 server.go:1280] "Started kubelet" Jan 07 
03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.645607 4980 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.645377 4980 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.645826 4980 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.646950 4980 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 07 03:32:33 crc systemd[1]: Started Kubernetes Kubelet. Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.651114 4980 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18885566c64bff87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 03:32:33.644814215 +0000 UTC m=+0.210508960,LastTimestamp:2026-01-07 03:32:33.644814215 +0000 UTC m=+0.210508960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.652142 4980 server.go:460] "Adding debug handlers to kubelet server" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.653468 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 07 
03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.653495 4980 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.653749 4980 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.653770 4980 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.653803 4980 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.653787 4980 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.654324 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.654399 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.655118 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="200ms" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.653611 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:23:51.770746456 +0000 UTC Jan 07 
03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.660093 4980 factory.go:55] Registering systemd factory Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.660122 4980 factory.go:221] Registration of the systemd container factory successfully Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.662776 4980 factory.go:153] Registering CRI-O factory Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.662805 4980 factory.go:221] Registration of the crio container factory successfully Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.662961 4980 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.663001 4980 factory.go:103] Registering Raw factory Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.663027 4980 manager.go:1196] Started watching for new ooms in manager Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.664947 4980 manager.go:319] Starting recovery of all containers Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665695 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665765 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665794 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665819 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665839 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665858 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665876 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665893 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.665913 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666141 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666159 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666178 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666196 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666243 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666269 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" 
seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666292 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666368 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666387 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666412 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666429 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666470 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 
03:32:33.666499 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666542 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666584 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666635 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666653 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666683 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666725 4980 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666763 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666789 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666812 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.666854 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667015 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667038 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667060 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667080 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667105 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667123 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667140 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667158 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667226 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667284 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667340 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667368 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667394 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667489 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" 
seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667614 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667646 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667773 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667796 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667815 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.667832 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 07 03:32:33 crc 
kubenswrapper[4980]: I0107 03:32:33.667960 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668008 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668053 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668106 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668149 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668177 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668247 4980 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668265 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668283 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668302 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668318 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668335 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668382 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668400 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668416 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668498 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668601 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668628 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668701 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668721 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668765 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668813 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668841 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668865 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668888 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668911 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.668965 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669006 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669100 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669124 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669146 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669168 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669227 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669263 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669296 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669318 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669420 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669446 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669470 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669493 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669516 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669538 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669619 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" 
seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669695 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669744 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669771 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669829 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669855 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669879 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 07 03:32:33 crc 
kubenswrapper[4980]: I0107 03:32:33.669897 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.669913 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670003 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670204 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670243 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670262 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670317 4980 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670374 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670402 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670455 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670474 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670509 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670528 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670546 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670590 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670699 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670718 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670735 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670752 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670792 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670825 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670856 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670889 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670920 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.670945 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671050 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671090 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671160 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671187 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671286 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671392 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671423 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671447 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671471 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671495 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671617 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671648 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 
03:32:33.671672 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671696 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671745 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671770 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671794 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671818 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671845 4980 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671869 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671891 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671911 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.671974 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.672008 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.672030 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.672054 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.672078 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.672100 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.672144 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.672179 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673414 4980 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673455 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673475 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673512 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673541 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673588 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673607 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673624 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673641 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673665 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673689 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673713 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673738 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673757 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673775 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673799 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673822 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673847 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673869 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" 
seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673886 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673905 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673923 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673940 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673960 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.673981 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 07 03:32:33 crc 
kubenswrapper[4980]: I0107 03:32:33.674004 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674026 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674053 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674075 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674112 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674134 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674167 4980 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674194 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674225 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674255 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674293 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674329 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674367 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674399 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674433 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674459 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674480 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674513 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674535 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674604 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674643 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674666 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674696 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674723 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.674750 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" 
seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.675211 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.675238 4980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.675258 4980 reconstruct.go:97] "Volume reconstruction finished" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.675271 4980 reconciler.go:26] "Reconciler: start to sync state" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.703233 4980 manager.go:324] Recovery completed Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.715596 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.720181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.720242 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.720253 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.721078 4980 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.721109 4980 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 
03:32:33.721140 4980 state_mem.go:36] "Initialized new in-memory state store" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.731627 4980 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.732925 4980 policy_none.go:49] "None policy: Start" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.733977 4980 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.734011 4980 state_mem.go:35] "Initializing new in-memory state store" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.734299 4980 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.734350 4980 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.734385 4980 kubelet.go:2335] "Starting kubelet main sync loop" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.734452 4980 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 07 03:32:33 crc kubenswrapper[4980]: W0107 03:32:33.735465 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.735531 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 
03:32:33.754127 4980 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.807466 4980 manager.go:334] "Starting Device Plugin manager" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.807628 4980 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.807657 4980 server.go:79] "Starting device plugin registration server" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.808401 4980 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.808469 4980 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.808776 4980 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.808918 4980 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.808935 4980 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.819650 4980 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.834903 4980 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.834991 4980 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.836185 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.836222 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.836232 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.836368 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.836561 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.836586 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837448 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837477 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837486 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837606 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837696 
4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837705 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837816 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.837842 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.838613 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.838627 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.838650 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.838669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.838653 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.838738 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.838749 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839008 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839056 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839433 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839459 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839467 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839659 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839706 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.839748 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.841116 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.841144 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.841153 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.841847 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.841879 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.842010 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.843283 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.843467 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.843211 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.843754 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.843771 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.847941 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.847965 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.847973 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.856132 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="400ms" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883094 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883150 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883258 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883359 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883422 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883466 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883510 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883637 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883720 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883765 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883809 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.883961 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.884017 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.884049 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.884078 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.909354 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.910228 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.910248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.910256 4980 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.910286 4980 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 07 03:32:33 crc kubenswrapper[4980]: E0107 03:32:33.910641 4980 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985136 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985202 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985242 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985278 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985313 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985344 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985374 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985406 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985439 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985455 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985507 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985544 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985471 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985634 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985672 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985702 4980 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985732 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985734 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985768 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985777 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985710 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985798 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985839 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985802 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985947 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.985980 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 
03:32:33.986018 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.986082 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.986100 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 07 03:32:33 crc kubenswrapper[4980]: I0107 03:32:33.986023 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.111082 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.113385 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.113441 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.113461 4980 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.113495 4980 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 07 03:32:34 crc kubenswrapper[4980]: E0107 03:32:34.113948 4980 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.175948 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.190457 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.206575 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-356b2b8827719d96a949afab98f60a7a105ac1f00c900338463d1a24627cb4a5 WatchSource:0}: Error finding container 356b2b8827719d96a949afab98f60a7a105ac1f00c900338463d1a24627cb4a5: Status 404 returned error can't find the container with id 356b2b8827719d96a949afab98f60a7a105ac1f00c900338463d1a24627cb4a5 Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.214793 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cd13d5afe1935f7aadee93b8ec3865e961029dd5854b939ff8d184d47a9f6332 WatchSource:0}: Error finding container cd13d5afe1935f7aadee93b8ec3865e961029dd5854b939ff8d184d47a9f6332: Status 404 returned error can't find the container with id cd13d5afe1935f7aadee93b8ec3865e961029dd5854b939ff8d184d47a9f6332 Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.215397 4980 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.230871 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.233209 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.238950 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e188c0ab35bf0d20eb5a3b5a87d85b61bcde4788f4b0d74b12ce0cd150d62f46 WatchSource:0}: Error finding container e188c0ab35bf0d20eb5a3b5a87d85b61bcde4788f4b0d74b12ce0cd150d62f46: Status 404 returned error can't find the container with id e188c0ab35bf0d20eb5a3b5a87d85b61bcde4788f4b0d74b12ce0cd150d62f46 Jan 07 03:32:34 crc kubenswrapper[4980]: E0107 03:32:34.258125 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="800ms" Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.260735 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0d8761a9e2a6dbf7ae37b3e9f1218733a024bbcd65ae966d82e7ad2b3d0b8e85 WatchSource:0}: Error finding container 0d8761a9e2a6dbf7ae37b3e9f1218733a024bbcd65ae966d82e7ad2b3d0b8e85: Status 404 returned error can't find the container with id 0d8761a9e2a6dbf7ae37b3e9f1218733a024bbcd65ae966d82e7ad2b3d0b8e85 Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.264268 4980 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-69021f779c41274fb0ae8ed13e33c38ebf98fa949e13660fd308e5e493d3ffd5 WatchSource:0}: Error finding container 69021f779c41274fb0ae8ed13e33c38ebf98fa949e13660fd308e5e493d3ffd5: Status 404 returned error can't find the container with id 69021f779c41274fb0ae8ed13e33c38ebf98fa949e13660fd308e5e493d3ffd5 Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.514086 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.515304 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.515344 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.515354 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.515381 4980 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 07 03:32:34 crc kubenswrapper[4980]: E0107 03:32:34.515866 4980 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc" Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.537418 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:34 crc kubenswrapper[4980]: E0107 03:32:34.537482 4980 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.647333 4980 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.656593 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:03:39.152648272 +0000 UTC Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.717883 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:34 crc kubenswrapper[4980]: E0107 03:32:34.717964 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.738183 4980 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933" exitCode=0 Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.738251 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.738345 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cd13d5afe1935f7aadee93b8ec3865e961029dd5854b939ff8d184d47a9f6332"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.738432 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.739633 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.739670 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.739681 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.740097 4980 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934" exitCode=0 Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.740161 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.740189 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"356b2b8827719d96a949afab98f60a7a105ac1f00c900338463d1a24627cb4a5"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.740260 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.741311 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.741340 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.741349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.742478 4980 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977" exitCode=0 Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.742548 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.742681 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"69021f779c41274fb0ae8ed13e33c38ebf98fa949e13660fd308e5e493d3ffd5"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.742895 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.744099 4980 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.744153 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d8761a9e2a6dbf7ae37b3e9f1218733a024bbcd65ae966d82e7ad2b3d0b8e85"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.744269 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.744297 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.744307 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.746955 4980 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808" exitCode=0 Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.746985 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808"} Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.747003 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e188c0ab35bf0d20eb5a3b5a87d85b61bcde4788f4b0d74b12ce0cd150d62f46"} Jan 07 03:32:34 crc 
kubenswrapper[4980]: I0107 03:32:34.747081 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.747963 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.747991 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.748002 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.751131 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.751890 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.751935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:34 crc kubenswrapper[4980]: I0107 03:32:34.751947 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:34 crc kubenswrapper[4980]: W0107 03:32:34.755692 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:34 crc kubenswrapper[4980]: E0107 03:32:34.755797 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:35 crc kubenswrapper[4980]: W0107 03:32:35.007174 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Jan 07 03:32:35 crc kubenswrapper[4980]: E0107 03:32:35.007262 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Jan 07 03:32:35 crc kubenswrapper[4980]: E0107 03:32:35.059427 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="1.6s" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.316901 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.318167 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.318205 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.318216 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 
03:32:35.318245 4980 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 07 03:32:35 crc kubenswrapper[4980]: E0107 03:32:35.319709 4980 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.658765 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:13:20.617281235 +0000 UTC Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.753415 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.753462 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.753486 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.753598 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.754766 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.754794 4980 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.754805 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.757119 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.757148 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.757161 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.757233 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.757801 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.757824 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.757835 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:35 crc kubenswrapper[4980]: 
I0107 03:32:35.758644 4980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.761372 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.761400 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.761413 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.761427 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.764913 4980 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da" exitCode=0 Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.764976 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.765108 4980 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.765959 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.765986 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.765998 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.767065 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"523d0471ddbf76f60815470ee971e8b259c45255a4e7fd7844ca9b258b4bcbee"} Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.767133 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.767718 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.767743 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:35 crc kubenswrapper[4980]: I0107 03:32:35.767754 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.659823 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:21:02.347422751 +0000 UTC Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.688310 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.774549 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0"} Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.774654 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.776201 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.776254 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.776272 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.779376 4980 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c" exitCode=0 Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.779491 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c"} Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.780128 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.781848 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 
03:32:36.782113 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.782258 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.782305 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.782325 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.785451 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.785519 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.785576 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.785669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.785797 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.785830 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.920764 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.922938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.922999 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.923019 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:36 crc kubenswrapper[4980]: I0107 03:32:36.923059 4980 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.660817 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:36:56.141176514 +0000 UTC Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.785953 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.786024 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.786696 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0"} Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.786746 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4"} Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.786767 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d"} Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 
03:32:37.790888 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.790931 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:37 crc kubenswrapper[4980]: I0107 03:32:37.790947 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:38 crc kubenswrapper[4980]: I0107 03:32:38.661637 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:25:03.169968702 +0000 UTC Jan 07 03:32:38 crc kubenswrapper[4980]: I0107 03:32:38.795131 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0"} Jan 07 03:32:38 crc kubenswrapper[4980]: I0107 03:32:38.795207 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe"} Jan 07 03:32:38 crc kubenswrapper[4980]: I0107 03:32:38.795775 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:38 crc kubenswrapper[4980]: I0107 03:32:38.799357 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:38 crc kubenswrapper[4980]: I0107 03:32:38.799411 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:38 crc kubenswrapper[4980]: I0107 03:32:38.799432 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 
03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.151244 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.151404 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.151450 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.152953 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.153003 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.153021 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.662166 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:25:47.866944929 +0000 UTC Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.798447 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.799910 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.799962 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:39 crc kubenswrapper[4980]: I0107 03:32:39.799980 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.170825 4980 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.171094 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.175881 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.176016 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.176485 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.354822 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.662664 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:32:07.405196336 +0000 UTC Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.800919 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.802244 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.802308 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:40 crc kubenswrapper[4980]: I0107 03:32:40.802333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.338968 4980 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.339163 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.340496 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.340535 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.340547 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.346200 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.663029 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:18:30.673055055 +0000 UTC Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.803974 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.805454 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.805511 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:41 crc kubenswrapper[4980]: I0107 03:32:41.805578 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:42 crc kubenswrapper[4980]: I0107 03:32:42.118874 
4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:42 crc kubenswrapper[4980]: I0107 03:32:42.664121 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:31:46.196264309 +0000 UTC Jan 07 03:32:42 crc kubenswrapper[4980]: I0107 03:32:42.806890 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:32:42 crc kubenswrapper[4980]: I0107 03:32:42.806958 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:42 crc kubenswrapper[4980]: I0107 03:32:42.808221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:42 crc kubenswrapper[4980]: I0107 03:32:42.808281 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:42 crc kubenswrapper[4980]: I0107 03:32:42.808299 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:43 crc kubenswrapper[4980]: I0107 03:32:43.445138 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 07 03:32:43 crc kubenswrapper[4980]: I0107 03:32:43.445451 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:43 crc kubenswrapper[4980]: I0107 03:32:43.446863 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:43 crc kubenswrapper[4980]: I0107 03:32:43.446926 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:43 crc kubenswrapper[4980]: I0107 03:32:43.446943 4980 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:43 crc kubenswrapper[4980]: I0107 03:32:43.664971 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:44:29.453447366 +0000 UTC Jan 07 03:32:43 crc kubenswrapper[4980]: E0107 03:32:43.819758 4980 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.105127 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.105383 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.105436 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.108052 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.108103 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.108122 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.112245 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.260763 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.260989 4980 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.262807 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.262859 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.262882 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.665375 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:45:51.09434993 +0000 UTC Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.813503 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.815025 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.815084 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:44 crc kubenswrapper[4980]: I0107 03:32:44.815102 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.120755 4980 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.121449 4980 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.647468 4980 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.648550 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.666463 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:19:37.214645937 +0000 UTC Jan 07 03:32:45 crc kubenswrapper[4980]: E0107 03:32:45.760672 4980 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.816667 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.817994 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.818059 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 
03:32:45 crc kubenswrapper[4980]: I0107 03:32:45.818079 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:46 crc kubenswrapper[4980]: E0107 03:32:46.660772 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 07 03:32:46 crc kubenswrapper[4980]: I0107 03:32:46.667013 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:16:17.123695456 +0000 UTC Jan 07 03:32:46 crc kubenswrapper[4980]: W0107 03:32:46.840861 4980 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 07 03:32:46 crc kubenswrapper[4980]: I0107 03:32:46.841000 4980 trace.go:236] Trace[438725379]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 03:32:36.839) (total time: 10001ms): Jan 07 03:32:46 crc kubenswrapper[4980]: Trace[438725379]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (03:32:46.840) Jan 07 03:32:46 crc kubenswrapper[4980]: Trace[438725379]: [10.001539113s] [10.001539113s] END Jan 07 03:32:46 crc kubenswrapper[4980]: E0107 03:32:46.841033 4980 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake 
timeout" logger="UnhandledError" Jan 07 03:32:46 crc kubenswrapper[4980]: E0107 03:32:46.924962 4980 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 07 03:32:47 crc kubenswrapper[4980]: I0107 03:32:47.022709 4980 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 07 03:32:47 crc kubenswrapper[4980]: I0107 03:32:47.022789 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 07 03:32:47 crc kubenswrapper[4980]: I0107 03:32:47.036814 4980 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 07 03:32:47 crc kubenswrapper[4980]: I0107 03:32:47.036878 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 07 03:32:47 crc kubenswrapper[4980]: I0107 03:32:47.667632 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-11-22 18:12:50.770158233 +0000 UTC Jan 07 03:32:48 crc kubenswrapper[4980]: I0107 03:32:48.668106 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:29:05.417358498 +0000 UTC Jan 07 03:32:49 crc kubenswrapper[4980]: I0107 03:32:49.668985 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:16:37.525633169 +0000 UTC Jan 07 03:32:49 crc kubenswrapper[4980]: I0107 03:32:49.669041 4980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 97h43m47.856597812s for next certificate rotation Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.010239 4980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.029209 4980 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.126459 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.128651 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.128716 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.128735 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.128782 4980 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 07 03:32:50 crc kubenswrapper[4980]: E0107 03:32:50.134756 4980 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.364114 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.364474 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.366747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.366822 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.366840 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.371499 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.831149 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.831225 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.832769 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.832865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:50 crc kubenswrapper[4980]: I0107 03:32:50.832885 4980 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:51 crc kubenswrapper[4980]: I0107 03:32:51.381306 4980 csr.go:261] certificate signing request csr-lkzqc is approved, waiting to be issued Jan 07 03:32:51 crc kubenswrapper[4980]: I0107 03:32:51.392683 4980 csr.go:257] certificate signing request csr-lkzqc is issued Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.017108 4980 trace.go:236] Trace[1987768956]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 03:32:37.602) (total time: 14414ms): Jan 07 03:32:52 crc kubenswrapper[4980]: Trace[1987768956]: ---"Objects listed" error: 14414ms (03:32:52.016) Jan 07 03:32:52 crc kubenswrapper[4980]: Trace[1987768956]: [14.414691515s] [14.414691515s] END Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.017159 4980 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.017594 4980 trace.go:236] Trace[429278999]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 03:32:37.704) (total time: 14313ms): Jan 07 03:32:52 crc kubenswrapper[4980]: Trace[429278999]: ---"Objects listed" error: 14313ms (03:32:52.017) Jan 07 03:32:52 crc kubenswrapper[4980]: Trace[429278999]: [14.313471292s] [14.313471292s] END Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.017629 4980 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.017806 4980 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.024651 4980 trace.go:236] Trace[1607139363]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 03:32:37.327) (total time: 14696ms): Jan 07 03:32:52 crc kubenswrapper[4980]: Trace[1607139363]: ---"Objects listed" error: 14696ms 
(03:32:52.024) Jan 07 03:32:52 crc kubenswrapper[4980]: Trace[1607139363]: [14.696993246s] [14.696993246s] END Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.024693 4980 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.093138 4980 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47478->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.093214 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47478->192.168.126.11:17697: read: connection reset by peer" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.093256 4980 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49282->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.093342 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49282->192.168.126.11:17697: read: connection reset by peer" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.093858 4980 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.093946 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.394764 4980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-07 03:27:51 +0000 UTC, rotation deadline is 2026-10-22 23:07:46.61616951 +0000 UTC Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.394834 4980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6931h34m54.22134058s for next certificate rotation Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.493381 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.497663 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.646214 4980 apiserver.go:52] "Watching apiserver" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.648873 4980 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.649293 4980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.649834 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.649870 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.649872 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.650125 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.650118 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.650120 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.650196 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.650381 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.650615 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.653540 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.653618 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.653753 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.653857 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.653913 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.653983 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.654022 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.654697 4980 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.656036 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.656285 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 07 03:32:52 crc kubenswrapper[4980]: 
I0107 03:32:52.674176 4980 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.687021 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.709204 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.720217 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722652 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722752 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" 
(UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722789 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722820 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722850 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722877 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722906 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722935 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722964 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.722990 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723018 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723043 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723069 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723072 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723095 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723127 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723153 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723179 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723174 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723207 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723236 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723265 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723301 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 07 03:32:52 crc 
kubenswrapper[4980]: I0107 03:32:52.723328 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723355 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723380 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723407 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723436 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723462 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723486 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723510 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723534 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723584 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723614 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" 
(UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723644 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723669 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723698 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723724 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723756 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723783 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723810 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723851 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723876 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723901 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723924 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 
03:32:52.723952 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723978 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724000 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724023 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724049 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724074 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724098 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724123 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724147 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724172 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724197 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 
03:32:52.724224 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724246 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724278 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724302 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724329 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724355 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724380 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724408 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724436 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724462 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724488 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724514 4980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724538 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724589 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724614 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724642 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724677 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" 
(UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724702 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724734 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724760 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724787 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724812 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724840 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724866 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724902 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725056 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725087 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725116 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725141 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725166 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725196 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725224 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725250 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725275 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725312 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725347 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725378 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725404 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725432 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 
07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725465 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725492 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725579 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725607 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725631 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725666 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725696 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725722 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723266 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725746 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725761 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723390 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725773 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725859 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725898 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725928 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725958 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725988 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726019 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726048 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726075 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726102 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726128 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726155 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726180 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726206 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726232 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726260 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726289 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726318 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726378 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726402 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726511 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726537 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726591 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726616 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726640 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726675 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726700 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726724 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726753 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726784 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726808 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726835 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726858 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726884 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726915 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726942 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726968 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726995 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727027 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" 
(UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727062 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727090 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727117 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727142 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727168 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727196 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727221 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727245 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727272 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727296 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727323 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727347 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727370 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727398 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727421 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727451 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727477 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727503 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727526 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727627 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727686 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727716 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727748 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727785 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727931 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727969 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728000 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728099 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728135 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728168 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728200 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728238 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728265 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728297 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728330 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728367 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728397 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728428 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728461 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728489 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728518 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728544 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728592 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728685 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" 
(UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728725 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728754 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728785 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728938 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728976 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729044 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729076 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729106 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729135 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729164 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729230 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729322 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729352 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730833 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730884 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730920 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730951 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730985 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731018 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731284 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731325 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731356 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731387 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731416 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731488 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731515 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731533 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731549 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731584 4980 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.723978 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724092 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724158 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724149 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724166 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724315 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724353 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724487 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724507 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724533 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724708 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724765 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724795 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724869 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.724948 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725027 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725075 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725434 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725437 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725722 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.725721 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726069 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726246 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726618 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.726903 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727147 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727181 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.727289 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728303 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728696 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.728787 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729341 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729548 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729581 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729680 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729745 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.729754 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730067 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730162 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730269 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730766 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.730878 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731029 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731381 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731460 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731071 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731637 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.731669 4980 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.737227 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:53.237170793 +0000 UTC m=+19.802865538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.737495 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.738259 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.739146 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.739472 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.739696 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.739837 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-ope
rator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.740012 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.740101 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.740146 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.740883 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.740989 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.741334 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.741528 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.741639 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.741710 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.741789 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.741957 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.741981 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.742294 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.742660 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.742602 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731818 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731856 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.742792 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.742973 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.743090 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.743251 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.743631 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.743692 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.743814 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.744070 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.744158 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.732137 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.732188 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.732175 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.732485 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.732569 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.732947 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.745152 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.745154 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733293 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733485 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733655 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733670 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733681 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733922 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733942 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.734135 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.734198 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.734511 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.734529 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.734870 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.735294 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.735493 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.735883 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.736090 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.736222 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.736496 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.736508 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.744307 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:32:53.2442665 +0000 UTC m=+19.809961415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.745550 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.745808 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.745998 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.746006 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.744779 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.744807 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.744737 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.745815 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.746221 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.747391 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.747651 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.747884 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.748029 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.748046 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.733064 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.748435 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.731808 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.748940 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.749108 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.749135 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.749204 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:53.249187382 +0000 UTC m=+19.814882127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.749915 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.749943 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.744346 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.750089 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.749928 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.750328 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.750374 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.750511 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.750627 4980 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.750710 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.750924 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.752176 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.752301 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.755096 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.758530 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.759157 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.759170 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.759305 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.759330 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.759347 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.759363 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.759420 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:53.259403715 +0000 UTC m=+19.825098460 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.760749 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.760980 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.761047 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.762662 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.762795 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.763402 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.763356 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.765456 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.767241 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.767462 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.767821 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.767926 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.768081 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.768439 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.768900 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.769135 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.769252 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.769319 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.769793 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.769819 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.770341 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.770429 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.770588 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.772011 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.775654 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.775689 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.775714 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.775921 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.778125 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.778479 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.779067 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.780277 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: 
"kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.780519 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.780758 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.780756 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.781060 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.781104 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.781132 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.781153 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.781055 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.781202 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.781219 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.781231 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.781279 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:53.281262485 +0000 UTC m=+19.846957220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.782304 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.782725 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.782753 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.782836 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.785992 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.786248 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.787490 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.787589 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.787679 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.787706 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.787778 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.787872 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.788565 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.789168 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.789289 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.789389 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.799988 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.807608 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.809838 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.812292 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.832970 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.833013 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.833584 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.833616 4980 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.833629 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.833640 4980 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.835667 4980 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.835787 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.835874 4980 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.835945 4980 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836032 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836107 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836179 4980 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836245 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836326 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836389 4980 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836457 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836525 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836602 4980 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836679 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836752 4980 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.836826 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.837025 4980 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.837110 4980 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 07 
03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.838909 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839012 4980 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839090 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.837433 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839124 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839174 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839744 4980 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839784 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839809 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839831 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839867 4980 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839896 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839922 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839945 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.839980 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840008 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840035 4980 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840064 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840092 4980 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840117 4980 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840137 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on 
node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840163 4980 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840186 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840207 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840228 4980 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840273 4980 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840325 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840346 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840373 4980 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840396 4980 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840418 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840451 4980 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840479 4980 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840500 4980 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840521 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840544 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840610 4980 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840636 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840658 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840679 4980 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840705 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840727 4980 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840749 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840775 4980 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840797 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840819 4980 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840839 4980 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840953 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.840977 4980 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841092 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841153 4980 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841178 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841196 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841224 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841236 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841250 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841268 4980 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath 
\"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841279 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841306 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841317 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841329 4980 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841338 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841348 4980 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841383 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841394 4980 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841403 4980 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841413 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841424 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841433 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841460 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841472 4980 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841484 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841493 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841505 4980 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841531 4980 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841542 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841576 4980 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841586 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841597 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc 
kubenswrapper[4980]: I0107 03:32:52.841614 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841624 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841633 4980 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841667 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841676 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841687 4980 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841695 4980 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.841814 4980 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845750 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845785 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845800 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845812 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845823 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845835 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845847 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845857 4980 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845867 4980 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845876 4980 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845885 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845896 4980 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845905 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845914 4980 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 
03:32:52.845938 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845947 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845957 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845966 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845976 4980 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845985 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.845994 4980 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846003 4980 reconciler_common.go:293] "Volume detached for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846012 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846021 4980 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846029 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846039 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846051 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846060 4980 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846068 4980 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846078 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846087 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846096 4980 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846109 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846119 4980 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846129 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846139 4980 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on 
node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846149 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846161 4980 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846172 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846183 4980 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846194 4980 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846206 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846217 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846227 4980 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846237 4980 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846247 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846258 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846269 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846291 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846301 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846310 4980 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846321 4980 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846333 4980 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846342 4980 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846352 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846362 4980 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846372 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846464 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846476 4980 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846487 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846496 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846533 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846545 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846564 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846574 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath 
\"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846583 4980 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846593 4980 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846601 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846610 4980 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846620 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846629 4980 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846638 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846648 4980 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846658 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846675 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846684 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846693 4980 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846704 4980 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846713 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846721 4980 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846729 4980 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.846737 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.849178 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.851534 4980 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0" exitCode=255 Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.851625 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0"} Jan 07 03:32:52 crc kubenswrapper[4980]: E0107 03:32:52.861812 4980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.872213 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.872727 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.872894 4980 scope.go:117] "RemoveContainer" containerID="7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.884061 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.906806 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.925292 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.941214 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.964658 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.968312 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.973725 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.984120 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 07 03:32:52 crc kubenswrapper[4980]: I0107 03:32:52.984209 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.258841 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259157 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:32:54.259126456 +0000 UTC m=+20.824821191 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.259385 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.259415 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.259439 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259595 4980 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 
07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259648 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:54.259638331 +0000 UTC m=+20.825333066 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259604 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259685 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259697 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:54.259689363 +0000 UTC m=+20.825384098 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259703 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259720 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.259767 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:54.259747834 +0000 UTC m=+20.825442569 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.360654 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.360913 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.360961 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.360977 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.361054 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-07 03:32:54.361031492 +0000 UTC m=+20.926726217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.457601 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nv5s5"] Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.457950 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.460094 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.462221 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.462531 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.475040 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.493119 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.510850 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.536667 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.550103 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.561610 4980 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.561826 4980 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.561836 4980 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.561881 4980 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very 
short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.561877 4980 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.561918 4980 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.561941 4980 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.561972 4980 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.562000 4980 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.562025 4980 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": 
watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.562031 4980 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.561966 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf/status\": read tcp 38.102.83.65:34978->38.102.83.65:6443: use of closed network connection" Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.562050 4980 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.562061 4980 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: W0107 03:32:53.562103 4980 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.562034 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/e314feea-3256-447b-8f15-50ffcefd4d38-hosts-file\") pod \"node-resolver-nv5s5\" (UID: \"e314feea-3256-447b-8f15-50ffcefd4d38\") " pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.562216 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mz4\" (UniqueName: \"kubernetes.io/projected/e314feea-3256-447b-8f15-50ffcefd4d38-kube-api-access-k4mz4\") pod \"node-resolver-nv5s5\" (UID: \"e314feea-3256-447b-8f15-50ffcefd4d38\") " pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:53 crc kubenswrapper[4980]: E0107 03:32:53.562128 4980 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": read tcp 38.102.83.65:34978->38.102.83.65:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-controller-manager-crc.1888556701e7e3f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 03:32:34.644886514 +0000 UTC m=+1.210581259,LastTimestamp:2026-01-07 03:32:34.644886514 +0000 UTC m=+1.210581259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.602002 4980 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.615959 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.633220 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.662873 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e314feea-3256-447b-8f15-50ffcefd4d38-hosts-file\") pod \"node-resolver-nv5s5\" (UID: \"e314feea-3256-447b-8f15-50ffcefd4d38\") " pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.662936 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4mz4\" (UniqueName: 
\"kubernetes.io/projected/e314feea-3256-447b-8f15-50ffcefd4d38-kube-api-access-k4mz4\") pod \"node-resolver-nv5s5\" (UID: \"e314feea-3256-447b-8f15-50ffcefd4d38\") " pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.663045 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e314feea-3256-447b-8f15-50ffcefd4d38-hosts-file\") pod \"node-resolver-nv5s5\" (UID: \"e314feea-3256-447b-8f15-50ffcefd4d38\") " pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.740449 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.741015 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.741769 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.742534 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.743312 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.743884 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.744465 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.746053 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.746981 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.747464 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.747980 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.749354 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.749894 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.750754 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.751241 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.752854 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.753390 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.753763 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.754707 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.755273 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.755740 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.756655 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.757065 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.758028 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.758439 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.759405 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.760020 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.760455 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.761387 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.761832 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.762604 4980 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.762702 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.764252 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.765916 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.766482 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.766928 4980 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.768393 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.769760 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.770241 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.771321 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.772016 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.772192 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4mz4\" (UniqueName: \"kubernetes.io/projected/e314feea-3256-447b-8f15-50ffcefd4d38-kube-api-access-k4mz4\") pod \"node-resolver-nv5s5\" (UID: \"e314feea-3256-447b-8f15-50ffcefd4d38\") " pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.772495 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.773448 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.775100 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.776100 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.776706 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.777352 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.777919 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.779031 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.779529 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.780087 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.780697 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.781337 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.782010 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.782575 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.796170 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.809381 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.821382 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.836540 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.839035 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9ct5r"] Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.839454 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.840467 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rwpf2"] Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.841044 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.842241 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.842363 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.842437 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.842455 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.842537 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.842623 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.844747 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.846269 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hzlt6"] Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.846818 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.849061 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.849247 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.849503 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.849739 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.854878 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.855685 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5"} Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.855733 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260"} Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.855747 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2c959cdbb2592a426495494f2b7a303dbd8abf559f761c605479492f2039bd81"} Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.857542 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d84efdb33a4a894eb565488085a0b09cff030d3c1e76f7f5c73b95e3162a5e04"} Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.858602 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411"} Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.858662 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"691cbddea9a455775b0a8925affdc54c51e637168971f4f0c2e5871c998e65fa"} Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.859082 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.861481 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.864581 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d"} Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.878117 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.892338 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.905024 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.915395 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.927062 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.938943 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.948889 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.962569 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965077 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-hostroot\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965129 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-etc-kubernetes\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965150 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-cnibin\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965166 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-netns\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965183 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-cni-multus\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965201 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-multus-certs\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965217 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-proxy-tls\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965261 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-system-cni-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965281 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-socket-dir-parent\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965298 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965315 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-conf-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965332 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03d0f597-1e90-409f-8345-b641cb7342ea-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965348 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/03d0f597-1e90-409f-8345-b641cb7342ea-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 
03:32:53.965370 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-rootfs\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965393 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqxb\" (UniqueName: \"kubernetes.io/projected/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-kube-api-access-8hqxb\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965421 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-os-release\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965488 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965509 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b3e552e-9608-4577-86c3-5f7573ef22f6-cni-binary-copy\") pod \"multus-9ct5r\" (UID: 
\"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965527 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-k8s-cni-cncf-io\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965567 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-kubelet\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965595 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-daemon-config\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965612 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-cnibin\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.965629 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-system-cni-dir\") pod 
\"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.967272 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6wv\" (UniqueName: \"kubernetes.io/projected/03d0f597-1e90-409f-8345-b641cb7342ea-kube-api-access-2d6wv\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.967571 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9htm\" (UniqueName: \"kubernetes.io/projected/3b3e552e-9608-4577-86c3-5f7573ef22f6-kube-api-access-j9htm\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.967613 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-cni-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.967631 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-cni-bin\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.967688 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-os-release\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.978946 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:53 crc kubenswrapper[4980]: I0107 03:32:53.989644 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.003206 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.016955 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.031734 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.043695 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.061568 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069139 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-hostroot\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069179 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-etc-kubernetes\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069199 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-netns\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069217 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-cni-multus\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069239 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-multus-certs\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069258 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-hostroot\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069263 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-proxy-tls\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069349 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-netns\") pod \"multus-9ct5r\" (UID: 
\"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069406 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-cnibin\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069419 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-cni-multus\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069442 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-system-cni-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069533 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-cnibin\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069444 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-multus-certs\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069444 4980 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-etc-kubernetes\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069512 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-system-cni-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069601 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-socket-dir-parent\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069632 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-conf-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069647 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-socket-dir-parent\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069652 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03d0f597-1e90-409f-8345-b641cb7342ea-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069707 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-conf-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069714 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069763 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-rootfs\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069793 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqxb\" (UniqueName: \"kubernetes.io/projected/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-kube-api-access-8hqxb\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069837 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-os-release\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069857 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/03d0f597-1e90-409f-8345-b641cb7342ea-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069910 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069928 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-k8s-cni-cncf-io\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069945 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-kubelet\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.069987 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-daemon-config\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070040 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-cnibin\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070064 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b3e552e-9608-4577-86c3-5f7573ef22f6-cni-binary-copy\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070092 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-system-cni-dir\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070120 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6wv\" (UniqueName: \"kubernetes.io/projected/03d0f597-1e90-409f-8345-b641cb7342ea-kube-api-access-2d6wv\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070141 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9htm\" (UniqueName: 
\"kubernetes.io/projected/3b3e552e-9608-4577-86c3-5f7573ef22f6-kube-api-access-j9htm\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070160 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-cni-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070178 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-cni-bin\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070199 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-os-release\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070440 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03d0f597-1e90-409f-8345-b641cb7342ea-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070464 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-os-release\") pod \"multus-9ct5r\" (UID: 
\"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070507 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-rootfs\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070593 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070741 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-cnibin\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070793 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-run-k8s-cni-cncf-io\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070820 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-kubelet\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " 
pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070874 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-cni-dir\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.070894 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b3e552e-9608-4577-86c3-5f7573ef22f6-host-var-lib-cni-bin\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.071241 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-system-cni-dir\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.071295 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-os-release\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.071415 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b3e552e-9608-4577-86c3-5f7573ef22f6-cni-binary-copy\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.071524 4980 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nv5s5" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.071730 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3b3e552e-9608-4577-86c3-5f7573ef22f6-multus-daemon-config\") pod \"multus-9ct5r\" (UID: \"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.072052 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/03d0f597-1e90-409f-8345-b641cb7342ea-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.072156 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03d0f597-1e90-409f-8345-b641cb7342ea-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.073974 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-proxy-tls\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.087476 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9htm\" (UniqueName: \"kubernetes.io/projected/3b3e552e-9608-4577-86c3-5f7573ef22f6-kube-api-access-j9htm\") pod \"multus-9ct5r\" (UID: 
\"3b3e552e-9608-4577-86c3-5f7573ef22f6\") " pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: W0107 03:32:54.089735 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode314feea_3256_447b_8f15_50ffcefd4d38.slice/crio-f0c6d17520f93840d800d55dcd2b8a61c9772e6df34c8868972917106a114a64 WatchSource:0}: Error finding container f0c6d17520f93840d800d55dcd2b8a61c9772e6df34c8868972917106a114a64: Status 404 returned error can't find the container with id f0c6d17520f93840d800d55dcd2b8a61c9772e6df34c8868972917106a114a64 Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.094450 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqxb\" (UniqueName: \"kubernetes.io/projected/ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4-kube-api-access-8hqxb\") pod \"machine-config-daemon-hzlt6\" (UID: \"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.111542 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6wv\" (UniqueName: \"kubernetes.io/projected/03d0f597-1e90-409f-8345-b641cb7342ea-kube-api-access-2d6wv\") pod \"multus-additional-cni-plugins-rwpf2\" (UID: \"03d0f597-1e90-409f-8345-b641cb7342ea\") " pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.152464 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9ct5r" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.159587 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.170295 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:32:54 crc kubenswrapper[4980]: W0107 03:32:54.171309 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3e552e_9608_4577_86c3_5f7573ef22f6.slice/crio-38f9e26741123c849d7f32cc0195ff71a3eef90f6d26a6230acca266f1e645a5 WatchSource:0}: Error finding container 38f9e26741123c849d7f32cc0195ff71a3eef90f6d26a6230acca266f1e645a5: Status 404 returned error can't find the container with id 38f9e26741123c849d7f32cc0195ff71a3eef90f6d26a6230acca266f1e645a5 Jan 07 03:32:54 crc kubenswrapper[4980]: W0107 03:32:54.174144 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03d0f597_1e90_409f_8345_b641cb7342ea.slice/crio-812c25a3944658453e47224d6767bad211c06ac515f42af1fcfcde8d01d4bf36 WatchSource:0}: Error finding container 812c25a3944658453e47224d6767bad211c06ac515f42af1fcfcde8d01d4bf36: Status 404 returned error can't find the container with id 812c25a3944658453e47224d6767bad211c06ac515f42af1fcfcde8d01d4bf36 Jan 07 03:32:54 crc kubenswrapper[4980]: W0107 03:32:54.182085 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea1f90e9_fae8_436d_a7fa_5bff36e1c2a4.slice/crio-799f4bb45cba49678739f4ba51d9729f295a8471f5f1f4eb70ba224ddacb1fc7 WatchSource:0}: Error finding container 799f4bb45cba49678739f4ba51d9729f295a8471f5f1f4eb70ba224ddacb1fc7: Status 404 returned error can't find the container with id 799f4bb45cba49678739f4ba51d9729f295a8471f5f1f4eb70ba224ddacb1fc7 Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.235070 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5n7sj"] Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.236134 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.237671 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.238419 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.239396 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.239431 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.239450 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.239438 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.241070 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.261717 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.272064 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.272213 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.272248 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.272273 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272444 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272499 4980 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272547 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:32:56.272505555 +0000 UTC m=+22.838200290 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272611 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:56.272602148 +0000 UTC m=+22.838296883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272638 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272657 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272671 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 
03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.272715 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:56.272698771 +0000 UTC m=+22.838393696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.276944 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:56.27691958 +0000 UTC m=+22.842614315 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.294996 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.299326 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.315349 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.319398 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.328090 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.346314 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.346810 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.366211 4980 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.372984 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-systemd-units\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373039 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-var-lib-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373092 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-log-socket\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373118 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzxmn\" (UniqueName: \"kubernetes.io/projected/6c962a95-c8ed-4d65-810e-1da967416c06-kube-api-access-xzxmn\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373160 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-node-log\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373181 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-slash\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373231 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373259 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-etc-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: 
I0107 03:32:54.373306 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c962a95-c8ed-4d65-810e-1da967416c06-ovn-node-metrics-cert\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373332 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-netns\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373399 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-ovn\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373421 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373441 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-config\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 
03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373484 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373504 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-bin\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373526 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-systemd\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373596 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-env-overrides\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373641 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-script-lib\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.373771 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-kubelet\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.373966 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.374045 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.374047 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.374066 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.374095 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-netd\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.374329 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-07 03:32:56.374299167 +0000 UTC m=+22.939993902 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.387059 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.433461 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.467670 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475280 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-systemd-units\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475332 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-var-lib-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475354 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-log-socket\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475405 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzxmn\" (UniqueName: \"kubernetes.io/projected/6c962a95-c8ed-4d65-810e-1da967416c06-kube-api-access-xzxmn\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475434 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-node-log\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475455 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-slash\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475491 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-etc-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475511 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c962a95-c8ed-4d65-810e-1da967416c06-ovn-node-metrics-cert\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475533 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-netns\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475572 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-ovn\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475590 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475609 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-config\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475632 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475653 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-bin\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475672 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-systemd\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475704 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-env-overrides\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475726 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-script-lib\") 
pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475763 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-kubelet\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475790 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475813 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-netd\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475890 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-netd\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475940 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-systemd-units\") pod \"ovnkube-node-5n7sj\" (UID: 
\"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475966 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-var-lib-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.475991 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-log-socket\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476506 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-kubelet\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476608 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476620 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-systemd\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc 
kubenswrapper[4980]: I0107 03:32:54.476642 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-ovn\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476649 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476712 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-node-log\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476698 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-slash\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476736 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-netns\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476751 4980 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476771 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-etc-openvswitch\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.476873 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.477214 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-config\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.477450 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-bin\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.478444 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-script-lib\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: 
I0107 03:32:54.479052 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-env-overrides\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.485038 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c962a95-c8ed-4d65-810e-1da967416c06-ovn-node-metrics-cert\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.504017 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.511811 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzxmn\" (UniqueName: \"kubernetes.io/projected/6c962a95-c8ed-4d65-810e-1da967416c06-kube-api-access-xzxmn\") pod \"ovnkube-node-5n7sj\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.537510 4980 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.551427 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.551940 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.556404 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: W0107 03:32:54.563289 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c962a95_c8ed_4d65_810e_1da967416c06.slice/crio-65e117d339419b5fcebfd45e8551dd0f29f30441331a6e21b5c52ff1ec1ea01e WatchSource:0}: Error finding container 65e117d339419b5fcebfd45e8551dd0f29f30441331a6e21b5c52ff1ec1ea01e: Status 404 returned error can't find the container with id 65e117d339419b5fcebfd45e8551dd0f29f30441331a6e21b5c52ff1ec1ea01e Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 
03:32:54.583136 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.601924 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.624586 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.632294 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.632766 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.636703 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.655921 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.672980 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.684333 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.684743 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.727246 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.735477 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.735512 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.735677 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.735860 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.735942 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:54 crc kubenswrapper[4980]: E0107 03:32:54.736069 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.740623 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.759071 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.772988 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.792151 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.807997 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.819186 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.831546 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.847633 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.857771 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.869952 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nv5s5" event={"ID":"e314feea-3256-447b-8f15-50ffcefd4d38","Type":"ContainerStarted","Data":"6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.870030 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nv5s5" event={"ID":"e314feea-3256-447b-8f15-50ffcefd4d38","Type":"ContainerStarted","Data":"f0c6d17520f93840d800d55dcd2b8a61c9772e6df34c8868972917106a114a64"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.871663 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86" exitCode=0 Jan 07 
03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.871763 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.871834 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"65e117d339419b5fcebfd45e8551dd0f29f30441331a6e21b5c52ff1ec1ea01e"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.874246 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.874322 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.874339 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"799f4bb45cba49678739f4ba51d9729f295a8471f5f1f4eb70ba224ddacb1fc7"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.882577 4980 generic.go:334] "Generic (PLEG): container finished" podID="03d0f597-1e90-409f-8345-b641cb7342ea" containerID="c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86" exitCode=0 Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.882602 4980 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerDied","Data":"c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.882709 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerStarted","Data":"812c25a3944658453e47224d6767bad211c06ac515f42af1fcfcde8d01d4bf36"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.884443 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerStarted","Data":"5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.884524 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerStarted","Data":"38f9e26741123c849d7f32cc0195ff71a3eef90f6d26a6230acca266f1e645a5"} Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.885241 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.891500 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.914103 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name
\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.929392 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.952231 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.967405 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:54 crc kubenswrapper[4980]: I0107 03:32:54.987775 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.013859 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.025709 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.028278 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.041955 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.059949 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.068436 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.077449 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/
\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.109265 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.137500 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.157677 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.167749 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.223842 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.262862 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.298747 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.336072 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.380288 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.426423 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.464435 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.502987 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.540469 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.578698 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.617832 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.657629 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.700658 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.751932 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.791201 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.836624 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.891267 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.891352 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" 
event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.891386 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.891415 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.891441 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.896879 4980 generic.go:334] "Generic (PLEG): container finished" podID="03d0f597-1e90-409f-8345-b641cb7342ea" containerID="99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67" exitCode=0 Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.896992 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerDied","Data":"99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67"} Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.921973 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.941604 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.966430 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:55 crc kubenswrapper[4980]: I0107 03:32:55.981636 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:55Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.019273 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.060629 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-i
dentity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.106715 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.145476 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.183440 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.219616 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.261735 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.296376 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.297801 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.297982 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298007 4980 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:33:00.297978733 +0000 UTC m=+26.863673468 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.298053 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.298092 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298094 4980 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298166 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:00.298144978 +0000 UTC m=+26.863839753 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298246 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298249 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298274 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298286 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298292 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-07 03:33:00.298284932 +0000 UTC m=+26.863979667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.298333 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:00.298316823 +0000 UTC m=+26.864011558 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.334233 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.379320 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.398776 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.398932 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.398959 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.398983 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.399040 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:00.399022692 +0000 UTC m=+26.964717437 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.535605 4980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.538167 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.538221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.538241 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.538356 4980 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.545283 4980 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.545678 4980 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.546705 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.546746 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.546758 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.546773 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.546784 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.564229 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.567578 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.567612 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.567624 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.567640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.567652 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.581255 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.585285 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.585312 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.585321 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.585334 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.585344 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.597325 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.600911 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.600945 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.600959 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.600977 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.600991 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.614398 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.617758 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.617794 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.617806 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.617821 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.617834 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.630262 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.630411 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.632261 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.632300 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.632312 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.632327 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.632338 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.734070 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.734103 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.734111 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.734122 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.734131 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.735459 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.735547 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.735597 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.735648 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.735646 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:32:56 crc kubenswrapper[4980]: E0107 03:32:56.735710 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.836734 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.836774 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.836783 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.836798 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.836808 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.905475 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.907195 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161"} Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.909706 4980 generic.go:334] "Generic (PLEG): container finished" podID="03d0f597-1e90-409f-8345-b641cb7342ea" containerID="878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d" exitCode=0 Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.909751 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerDied","Data":"878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d"} Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.931992 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.938587 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.938610 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.938617 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.938631 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.938642 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:56Z","lastTransitionTime":"2026-01-07T03:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.950236 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.965614 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.976750 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:56 crc kubenswrapper[4980]: I0107 03:32:56.990935 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:56Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.006824 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.026773 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.041088 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.041141 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.041159 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.041183 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.041202 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.043156 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.065058 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.083897 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.096788 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.110093 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.128928 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.141464 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.143290 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.143345 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.143365 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.143389 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.143407 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.157731 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.175394 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.188971 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.198905 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.213938 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.228841 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.245797 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.245864 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.245896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.245923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.245946 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.258593 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.302430 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.347698 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.348440 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.348483 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.348496 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.348515 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.348527 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.376586 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.417882 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-i
dentity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.451901 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.451938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.451951 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.451969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.451981 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.472734 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.502530 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.542144 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"n
ame\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\
":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.555142 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.555183 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.555199 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.555224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.555241 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.658413 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.658465 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.658482 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.658506 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.658527 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.761384 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.761697 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.761856 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.762004 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.762122 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.865343 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.865590 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.865733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.865894 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.866037 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.919197 4980 generic.go:334] "Generic (PLEG): container finished" podID="03d0f597-1e90-409f-8345-b641cb7342ea" containerID="7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8" exitCode=0 Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.919301 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerDied","Data":"7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.952482 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.969070 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.969104 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.969114 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.969129 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.969140 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:57Z","lastTransitionTime":"2026-01-07T03:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:57 crc kubenswrapper[4980]: I0107 03:32:57.985205 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:57Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.008463 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.040767 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da
55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.061722 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.074960 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc 
kubenswrapper[4980]: I0107 03:32:58.075020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.075040 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.075064 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.075082 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.080954 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.108964 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.133688 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.151899 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.169339 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.178409 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.178453 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.178470 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.178495 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.178512 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.189105 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3
140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.208208 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.225369 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.245620 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.281846 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.281902 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.281919 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 
03:32:58.281941 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.281957 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.385313 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.385370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.385388 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.385413 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.385430 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.488891 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.488970 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.488991 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.489019 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.489038 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.593294 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.593349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.593366 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.593389 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.593409 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.697186 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.697261 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.697281 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.697309 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.697330 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.735096 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.735154 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.735232 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:32:58 crc kubenswrapper[4980]: E0107 03:32:58.735293 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:32:58 crc kubenswrapper[4980]: E0107 03:32:58.735476 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:32:58 crc kubenswrapper[4980]: E0107 03:32:58.735672 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.800850 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.800912 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.800928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.800952 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.800970 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.904913 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.904978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.904996 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.905023 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.905046 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:58Z","lastTransitionTime":"2026-01-07T03:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.929859 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.934936 4980 generic.go:334] "Generic (PLEG): container finished" podID="03d0f597-1e90-409f-8345-b641cb7342ea" containerID="ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6" exitCode=0 Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.935012 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerDied","Data":"ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6"} Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.971198 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:58 crc kubenswrapper[4980]: I0107 03:32:58.991255 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.009990 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.010076 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.010102 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.010139 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.010164 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.026288 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.045469 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 
03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.070936 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 
03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.088822 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.109707 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.112934 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.112990 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.113008 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.113036 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.113053 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.133327 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.151415 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.169681 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.192793 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.210209 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.215234 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.215360 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.215450 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 
03:32:59.215544 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.215669 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.228188 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.244631 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.319029 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.319094 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.319114 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.319144 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.319165 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.424103 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.424167 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.424190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.424217 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.424238 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.527637 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.527693 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.527704 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.527718 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.527729 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.631217 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.631264 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.631272 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.631286 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.631294 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.734945 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.735017 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.735038 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.735068 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.735088 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.838380 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.838440 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.838458 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.838482 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.838504 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.945000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.945056 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.945073 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.945096 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.945112 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:32:59Z","lastTransitionTime":"2026-01-07T03:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.949843 4980 generic.go:334] "Generic (PLEG): container finished" podID="03d0f597-1e90-409f-8345-b641cb7342ea" containerID="ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d" exitCode=0 Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.949893 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerDied","Data":"ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d"} Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.972626 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:32:59 crc kubenswrapper[4980]: I0107 03:32:59.993315 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:32:59Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.016402 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.040457 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.047787 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.047866 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.047885 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 
03:33:00.047929 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.047951 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.059120 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.075949 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.109827 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.125267 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.147905 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.152432 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.152477 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.152496 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.152522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.152540 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.171909 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.191751 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.206290 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.226644 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.249296 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.254656 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.254710 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.254722 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.254741 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.254754 4980 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.339212 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.339347 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.339388 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339438 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-07 03:33:08.339410057 +0000 UTC m=+34.905104832 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.339533 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339550 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339641 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:08.339623003 +0000 UTC m=+34.905317778 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339732 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339826 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339742 4980 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339965 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:08.339936833 +0000 UTC m=+34.905631608 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.339850 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.340066 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:08.340050196 +0000 UTC m=+34.905744971 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.365963 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.366049 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.366074 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.366106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.366127 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.423920 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9pk7v"] Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.424304 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.427307 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.427766 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.428175 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.430774 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.440600 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.440669 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.440887 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.440924 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.440946 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.441036 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:08.441008754 +0000 UTC m=+35.006703519 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.463936 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.468471 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.468524 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.468543 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.468586 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.468612 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.479774 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.493631 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.507268 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.517992 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.541838 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ec078f4e-8312-4e23-a374-8da01dfc253a-serviceca\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.541901 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ec078f4e-8312-4e23-a374-8da01dfc253a-host\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.541970 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdx9v\" (UniqueName: \"kubernetes.io/projected/ec078f4e-8312-4e23-a374-8da01dfc253a-kube-api-access-zdx9v\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.553951 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.571629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.571694 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.571707 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.571730 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.571745 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.572328 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.610157 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.629885 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 
03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.643575 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdx9v\" (UniqueName: \"kubernetes.io/projected/ec078f4e-8312-4e23-a374-8da01dfc253a-kube-api-access-zdx9v\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.644086 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ec078f4e-8312-4e23-a374-8da01dfc253a-serviceca\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.644127 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ec078f4e-8312-4e23-a374-8da01dfc253a-host\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.644280 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ec078f4e-8312-4e23-a374-8da01dfc253a-host\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.645376 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ec078f4e-8312-4e23-a374-8da01dfc253a-serviceca\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.651805 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.665470 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.675273 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.675332 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.675343 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.675383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.675397 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.683339 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.699984 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdx9v\" (UniqueName: \"kubernetes.io/projected/ec078f4e-8312-4e23-a374-8da01dfc253a-kube-api-access-zdx9v\") pod \"node-ca-9pk7v\" (UID: \"ec078f4e-8312-4e23-a374-8da01dfc253a\") " pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.706642 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.728363 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.735640 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.735685 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.735707 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.735826 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.735961 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:00 crc kubenswrapper[4980]: E0107 03:33:00.736196 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.737785 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9pk7v" Jan 07 03:33:00 crc kubenswrapper[4980]: W0107 03:33:00.762916 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec078f4e_8312_4e23_a374_8da01dfc253a.slice/crio-34e7eeeba763f84c4f837bd70c4f8738d486ba6ccac59313917532d6dceba60c WatchSource:0}: Error finding container 34e7eeeba763f84c4f837bd70c4f8738d486ba6ccac59313917532d6dceba60c: Status 404 returned error can't find the container with id 34e7eeeba763f84c4f837bd70c4f8738d486ba6ccac59313917532d6dceba60c Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.777969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.778014 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.778029 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.778052 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.778064 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.881652 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.881728 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.881741 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.881766 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.881783 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.961122 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.961481 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.968086 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" event={"ID":"03d0f597-1e90-409f-8345-b641cb7342ea","Type":"ContainerStarted","Data":"02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.969902 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9pk7v" event={"ID":"ec078f4e-8312-4e23-a374-8da01dfc253a","Type":"ContainerStarted","Data":"34e7eeeba763f84c4f837bd70c4f8738d486ba6ccac59313917532d6dceba60c"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.978718 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.988764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.988823 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.988835 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.988855 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.988869 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:00Z","lastTransitionTime":"2026-01-07T03:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:00 crc kubenswrapper[4980]: I0107 03:33:00.997239 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3
140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:00Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.011296 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.012062 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.033424 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.052686 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.068157 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.091476 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.091519 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.091531 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.091571 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.091585 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.099236 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.110546 4980 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.118287 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.147273 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.162304 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:
53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.182578 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.194385 4980 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.194888 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.194928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.194943 4980 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.194965 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.194978 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.211696 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.231366 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.244408 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.261776 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.276759 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.294130 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.297532 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.297614 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.297628 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.297650 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.297667 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.307349 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.320034 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.333354 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.354028 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.367948 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.389449 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.400813 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.401104 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.401189 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.401288 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.401363 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.406573 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.421435 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.436041 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.452359 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 
03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.469655 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.481928 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:01Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.503909 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.504012 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.504132 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 
03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.504222 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.504300 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.607604 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.607660 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.607676 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.607699 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.607716 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.711151 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.711502 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.711679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.711824 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.711954 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.814678 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.814717 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.814729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.814744 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.814757 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.918082 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.918142 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.918164 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.918190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.918208 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:01Z","lastTransitionTime":"2026-01-07T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.975934 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9pk7v" event={"ID":"ec078f4e-8312-4e23-a374-8da01dfc253a","Type":"ContainerStarted","Data":"2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02"} Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.975989 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:33:01 crc kubenswrapper[4980]: I0107 03:33:01.976717 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.006952 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.010390 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.021200 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.021247 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.021261 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.021280 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.021294 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.035143 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.052320 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.071011 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.087739 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.108239 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.123857 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.123914 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 
03:33:02.123931 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.123954 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.123970 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.129534 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.152188 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.188143 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.220994 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.226711 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.226747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.226758 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.226773 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.226785 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.252062 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.272870 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.293531 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da
55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.314619 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.329770 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc 
kubenswrapper[4980]: I0107 03:33:02.329824 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.329844 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.329878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.329897 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.339535 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.375292 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.398963 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.422758 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.431564 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.431588 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.431596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.431609 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.431619 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.434085 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.447934 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.461468 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.472148 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.491735 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.509377 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.522160 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.533797 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.533849 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.533859 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc 
kubenswrapper[4980]: I0107 03:33:02.533874 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.533883 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.536643 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 
03:33:02.556269 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 
03:33:02.566582 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.579634 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.593647 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:02Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.635900 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.635927 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.635935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.635950 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.635960 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.735069 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:02 crc kubenswrapper[4980]: E0107 03:33:02.735218 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.735286 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:02 crc kubenswrapper[4980]: E0107 03:33:02.735332 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.735389 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:02 crc kubenswrapper[4980]: E0107 03:33:02.735429 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.738148 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.738169 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.738177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.738187 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.738196 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.840330 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.840385 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.840401 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.840418 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.840430 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.942548 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.942621 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.942636 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.942656 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.942672 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:02Z","lastTransitionTime":"2026-01-07T03:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:02 crc kubenswrapper[4980]: I0107 03:33:02.979180 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.045885 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.045943 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.045952 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.045975 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.045986 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.148336 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.148387 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.148398 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.148420 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.148431 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.251101 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.251171 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.251189 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.251221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.251245 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.353884 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.353955 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.353974 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.354004 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.354023 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.457479 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.457579 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.457598 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.457627 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.457650 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.561137 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.561474 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.561640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.561781 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.561906 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.665432 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.665848 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.666023 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.666162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.666288 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.769239 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.769338 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.769137 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.769356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.769597 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.769626 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.787964 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.818580 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.844083 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.867099 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.872246 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.872462 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.872616 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.872742 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.872870 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.892331 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.915259 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.927784 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.949305 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.969459 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.975858 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.975917 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.975930 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.975953 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.975968 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:03Z","lastTransitionTime":"2026-01-07T03:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.983854 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/0.log" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.988534 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c" exitCode=1 Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.988656 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c"} Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.990228 4980 scope.go:117] "RemoveContainer" containerID="4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c" Jan 07 03:33:03 crc kubenswrapper[4980]: I0107 03:33:03.991917 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.006864 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.021852 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.037675 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.055989 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.078195 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.079732 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.079780 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.079797 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.079823 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.079841 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.094644 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.116346 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.139933 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.158745 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.182848 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.183487 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.183546 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.183593 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.183618 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.183638 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.205472 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.224248 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.243146 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.277865 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.286230 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.286342 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.286369 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.286400 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.286421 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.324202 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.360344 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:03Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0107 03:33:03.255532 6326 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 
03:33:03.255960 6326 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256157 6326 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 03:33:03.256196 6326 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256470 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0107 03:33:03.256503 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0107 03:33:03.256633 6326 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256753 6326 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.378451 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.388274 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.388755 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.388792 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.388804 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.388819 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.388831 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.401920 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:04 crc kubenswrapper[4980]: 
I0107 03:33:04.491527 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.491596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.491614 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.491637 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.491654 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.594409 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.594460 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.594478 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.594503 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.594521 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.696865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.696898 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.696929 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.696946 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.696958 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.734977 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:04 crc kubenswrapper[4980]: E0107 03:33:04.735087 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.735399 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:04 crc kubenswrapper[4980]: E0107 03:33:04.735463 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.735504 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:04 crc kubenswrapper[4980]: E0107 03:33:04.735574 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.803995 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.804064 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.804083 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.804106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.804123 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.906484 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.906533 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.906545 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.906577 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.906589 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:04Z","lastTransitionTime":"2026-01-07T03:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.995865 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/0.log" Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.999528 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d"} Jan 07 03:33:04 crc kubenswrapper[4980]: I0107 03:33:04.999726 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.009015 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.009087 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.009102 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.009125 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.009140 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.018041 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3
140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.043312 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.062131 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.082435 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.104280 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.112639 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.112690 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.112706 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.112727 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.112741 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.120238 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.154325 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.168986 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.196408 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:03Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0107 
03:33:03.255532 6326 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 03:33:03.255960 6326 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256157 6326 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 03:33:03.256196 6326 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256470 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0107 03:33:03.256503 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0107 03:33:03.256633 6326 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256753 6326 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitc
h\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7
73257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.212018 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.214816 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.214848 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.214862 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.214880 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.214891 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.229438 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:
32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.247951 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.266917 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.293640 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.308239 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:05Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.317659 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.317701 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.317773 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.317796 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.317814 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.421390 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.421445 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.421457 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.421480 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.421493 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.524047 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.524137 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.524157 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.524206 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.524228 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.626842 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.626911 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.626923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.626941 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.626953 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.729721 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.729834 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.729852 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.729875 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.729892 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.833808 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.833881 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.833898 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.833923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.833941 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.959653 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.959701 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.959719 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.959742 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:05 crc kubenswrapper[4980]: I0107 03:33:05.959759 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:05Z","lastTransitionTime":"2026-01-07T03:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.007443 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/1.log" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.008448 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/0.log" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.011641 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d" exitCode=1 Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.011686 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.011743 4980 scope.go:117] "RemoveContainer" containerID="4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.015347 4980 scope.go:117] "RemoveContainer" containerID="df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d" Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.020071 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.041958 4980 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.064544 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.064643 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.064660 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.064683 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.064701 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.076067 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:03Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0107 03:33:03.255532 6326 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 03:33:03.255960 6326 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256157 6326 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 03:33:03.256196 6326 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256470 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0107 03:33:03.256503 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0107 03:33:03.256633 6326 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256753 6326 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 
6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\
\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.113518 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.134189 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.151794 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-i
dentity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.167641 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.168127 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc 
kubenswrapper[4980]: I0107 03:33:06.168167 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.168183 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.168207 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.168225 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.183372 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.201957 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.222367 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.238741 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.259319 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.270444 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.270491 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.270507 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.270528 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.270545 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.277271 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.290324 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.309291 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.325837 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.372596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.372650 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.372662 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.372680 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.372693 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.475014 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.475082 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.475100 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.475129 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.475147 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.578955 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.579025 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.579047 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.579070 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.579086 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.656988 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.657049 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.657073 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.657101 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.657118 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.677184 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.682374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.682435 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.682458 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.682485 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.682503 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.701436 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.705767 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.705813 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.705829 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.705849 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.705866 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.725176 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.731209 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.731248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.731264 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.731283 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.731299 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.734691 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.734717 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.734691 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.734854 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.734952 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.735049 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.752441 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.757236 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.757296 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.757314 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.757336 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.757353 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.776509 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:06 crc kubenswrapper[4980]: E0107 03:33:06.776761 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.778851 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.778897 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.778914 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.778938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.778954 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.881961 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.882020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.882033 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.882053 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.882069 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.990215 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.990289 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.990311 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.990338 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:06 crc kubenswrapper[4980]: I0107 03:33:06.990360 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:06Z","lastTransitionTime":"2026-01-07T03:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.017229 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/1.log" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.022472 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct"] Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.023211 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.026055 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.026172 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.028421 4980 scope.go:117] "RemoveContainer" containerID="df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d" Jan 07 03:33:07 crc kubenswrapper[4980]: E0107 03:33:07.028678 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.050384 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.081990 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea2eb475a281c19cfa2e75008846e993be8adba482521153cee53237b30491c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:03Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0107 03:33:03.255532 6326 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 03:33:03.255960 6326 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256157 6326 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0107 03:33:03.256196 6326 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256470 6326 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0107 03:33:03.256503 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0107 03:33:03.256633 6326 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:03.256753 6326 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 
6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\
\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.093902 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.093974 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.093993 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.094394 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.094444 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.114758 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.115636 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f887efc-b87f-4d3c-a077-f0c083487518-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.115699 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f887efc-b87f-4d3c-a077-f0c083487518-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.116780 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6f887efc-b87f-4d3c-a077-f0c083487518-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.116883 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrdt\" (UniqueName: \"kubernetes.io/projected/6f887efc-b87f-4d3c-a077-f0c083487518-kube-api-access-lfrdt\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.133378 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a4
6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for 
pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.152836 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnku
be-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.172931 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.189312 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.198218 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.198304 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.198321 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.198383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.198404 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.206038 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.217301 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f887efc-b87f-4d3c-a077-f0c083487518-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.217375 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f887efc-b87f-4d3c-a077-f0c083487518-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.217470 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f887efc-b87f-4d3c-a077-f0c083487518-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.217515 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrdt\" (UniqueName: \"kubernetes.io/projected/6f887efc-b87f-4d3c-a077-f0c083487518-kube-api-access-lfrdt\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.218292 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f887efc-b87f-4d3c-a077-f0c083487518-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.218424 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f887efc-b87f-4d3c-a077-f0c083487518-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.230362 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.230636 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f887efc-b87f-4d3c-a077-f0c083487518-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.251153 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrdt\" (UniqueName: \"kubernetes.io/projected/6f887efc-b87f-4d3c-a077-f0c083487518-kube-api-access-lfrdt\") pod \"ovnkube-control-plane-749d76644c-tqvct\" (UID: \"6f887efc-b87f-4d3c-a077-f0c083487518\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.253997 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.274899 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.294254 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.302187 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.302235 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.302253 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 
03:33:07.302280 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.302300 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.315408 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.332471 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.353874 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.363755 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.374490 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: W0107 03:33:07.384124 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f887efc_b87f_4d3c_a077_f0c083487518.slice/crio-37403f2b379ac4892568336b415b8830ad435251c7ba9af61e1a74eb384c4d20 WatchSource:0}: Error finding container 37403f2b379ac4892568336b415b8830ad435251c7ba9af61e1a74eb384c4d20: Status 404 returned error can't find the container with id 37403f2b379ac4892568336b415b8830ad435251c7ba9af61e1a74eb384c4d20 Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.402426 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.405255 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.405301 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.405321 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.405348 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.405365 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.432688 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.450499 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.471010 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.486104 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.498985 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.508470 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.508526 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.508611 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.508647 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.508665 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.510727 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.528670 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.547848 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.561600 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.596631 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.611362 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.611413 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.611430 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.611455 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.611475 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.613305 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.640403 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.660935 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.683116 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.698860 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.714383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.714461 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.714479 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.714538 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.714613 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.817210 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.817248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.817259 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.817278 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.817291 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.920423 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.920505 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.920532 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.920596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:07 crc kubenswrapper[4980]: I0107 03:33:07.920624 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:07Z","lastTransitionTime":"2026-01-07T03:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.023619 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.023667 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.023679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.023694 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.023707 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.032672 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" event={"ID":"6f887efc-b87f-4d3c-a077-f0c083487518","Type":"ContainerStarted","Data":"0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.032728 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" event={"ID":"6f887efc-b87f-4d3c-a077-f0c083487518","Type":"ContainerStarted","Data":"6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.032742 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" event={"ID":"6f887efc-b87f-4d3c-a077-f0c083487518","Type":"ContainerStarted","Data":"37403f2b379ac4892568336b415b8830ad435251c7ba9af61e1a74eb384c4d20"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.051598 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.074422 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.093993 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.107184 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.123615 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.126909 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.126957 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 
03:33:08.126969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.126985 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.126997 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.138375 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4
aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.151665 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f
1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.165252 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.177441 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-j75z7"] Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.177971 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.178040 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.179645 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.196433 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.211909 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.223017 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.234368 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.234410 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.234420 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.234436 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.234446 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.237936 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.249183 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.264396 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.278526 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.295406 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.308745 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.326349 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.334187 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.334527 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/1e3c7945-f3cb-4af2-8a0f-19b014123f74-kube-api-access-vtlz9\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.343169 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 
03:33:08.343214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.343226 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.343241 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.343250 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.346342 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.362045 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.377127 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.403054 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.419732 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.435743 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.435936 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:33:24.435911374 +0000 UTC m=+51.001606139 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.436130 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.436226 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.436277 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436316 4980 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.436359 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/1e3c7945-f3cb-4af2-8a0f-19b014123f74-kube-api-access-vtlz9\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436384 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:24.436368729 +0000 UTC m=+51.002063494 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.436654 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436696 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436734 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436750 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436757 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436812 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:24.436793932 +0000 UTC m=+51.002488667 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436850 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:24.436821123 +0000 UTC m=+51.002515898 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.436935 4980 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.437039 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. 
No retries permitted until 2026-01-07 03:33:08.937009199 +0000 UTC m=+35.502703974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.445723 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.445799 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.445816 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.445844 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.445863 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.451894 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.463543 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/1e3c7945-f3cb-4af2-8a0f-19b014123f74-kube-api-access-vtlz9\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.470619 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc 
kubenswrapper[4980]: I0107 03:33:08.490528 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.509925 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.528456 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.538136 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.538327 4980 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.538364 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.538384 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.538458 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:24.53843686 +0000 UTC m=+51.104131635 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.545871 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.547965 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.548008 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.548021 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.548043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.548056 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.564415 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.578265 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.594682 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.651289 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.651360 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.651379 4980 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.651408 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.651430 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.735206 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.735350 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.735437 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.735211 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.735609 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.735678 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.755688 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.755751 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.755771 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.755802 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.755820 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.859305 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.859378 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.859398 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.859424 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.859442 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.943446 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.943736 4980 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: E0107 03:33:08.943874 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:09.943844658 +0000 UTC m=+36.509539423 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.962429 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.962480 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.962494 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.962521 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:08 crc kubenswrapper[4980]: I0107 03:33:08.962541 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:08Z","lastTransitionTime":"2026-01-07T03:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.066046 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.066107 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.066126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.066153 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.066183 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.169773 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.169825 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.169842 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.169873 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.169894 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.272681 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.272738 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.272756 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.272785 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.272803 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.376087 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.376165 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.376190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.376221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.376243 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.479800 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.479869 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.479890 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.479916 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.479933 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.583212 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.583257 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.583273 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.583296 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.583312 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.686308 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.686362 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.686383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.686407 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.686425 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.735069 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:09 crc kubenswrapper[4980]: E0107 03:33:09.735253 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.789674 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.789739 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.789763 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.789789 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.789807 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.893331 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.893377 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.893392 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.893421 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.893434 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.954609 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:09 crc kubenswrapper[4980]: E0107 03:33:09.954821 4980 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:09 crc kubenswrapper[4980]: E0107 03:33:09.954934 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:11.954914516 +0000 UTC m=+38.520609261 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.995756 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.995824 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.995845 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.995873 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:09 crc kubenswrapper[4980]: I0107 03:33:09.995891 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:09Z","lastTransitionTime":"2026-01-07T03:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.098975 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.099040 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.099059 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.099083 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.099100 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.179159 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.202480 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.202585 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.202613 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.202642 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.202664 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.211837 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.228671 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.263258 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.283071 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.302032 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da
55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.305045 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.305098 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.305115 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.305139 4980 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.305157 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.322615 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.339914 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.362802 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.386135 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.402114 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.407738 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.407798 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.407816 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.407841 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.407858 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.418999 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc 
kubenswrapper[4980]: I0107 03:33:10.436530 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.450432 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.469699 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.490997 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.510600 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.511231 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.511296 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.511316 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.511343 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.511360 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.529740 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:10Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.613906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.613969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.613986 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.614012 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.614035 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.717059 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.717133 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.717159 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.717190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.717212 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.735444 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.735480 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.735492 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:10 crc kubenswrapper[4980]: E0107 03:33:10.735646 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:10 crc kubenswrapper[4980]: E0107 03:33:10.735761 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:10 crc kubenswrapper[4980]: E0107 03:33:10.735882 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.820299 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.820658 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.820847 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.820985 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.821117 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.924273 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.924485 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.924856 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.925046 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:10 crc kubenswrapper[4980]: I0107 03:33:10.925203 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:10Z","lastTransitionTime":"2026-01-07T03:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.028753 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.028818 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.028842 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.028868 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.028885 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.131536 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.131602 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.131614 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.131633 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.131645 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.234193 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.234278 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.234296 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.234329 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.234350 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.337160 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.337475 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.337587 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.337692 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.337795 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.441483 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.441534 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.441550 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.441608 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.441625 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.544097 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.544124 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.544132 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.544145 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.544153 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.647395 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.647766 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.647857 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.647939 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.648020 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.734765 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:11 crc kubenswrapper[4980]: E0107 03:33:11.734905 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.751091 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.751437 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.751606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.751747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.751889 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.855001 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.855059 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.855076 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.855102 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.855119 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.957221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.957288 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.957304 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.957327 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:11 crc kubenswrapper[4980]: I0107 03:33:11.957343 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:11Z","lastTransitionTime":"2026-01-07T03:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.011341 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:12 crc kubenswrapper[4980]: E0107 03:33:12.011611 4980 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:12 crc kubenswrapper[4980]: E0107 03:33:12.012011 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:16.011978474 +0000 UTC m=+42.577673249 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.060701 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.060761 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.060777 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.060804 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.060824 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.164061 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.164377 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.164542 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.164740 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.164860 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.268067 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.268126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.268143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.268168 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.268185 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.371162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.371250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.371267 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.371323 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.371342 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.474271 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.474356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.474374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.474398 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.474415 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.576480 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.576533 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.576629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.576664 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.576719 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.658154 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.659481 4980 scope.go:117] "RemoveContainer" containerID="df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d" Jan 07 03:33:12 crc kubenswrapper[4980]: E0107 03:33:12.659755 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.679383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.679713 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.679891 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.680023 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.680137 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.735200 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.735262 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.735278 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:12 crc kubenswrapper[4980]: E0107 03:33:12.736288 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:12 crc kubenswrapper[4980]: E0107 03:33:12.736340 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:12 crc kubenswrapper[4980]: E0107 03:33:12.736390 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.783754 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.783806 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.783829 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.783860 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.783882 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.886188 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.886534 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.886793 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.887041 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.887239 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.990500 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.990625 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.990647 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.990676 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:12 crc kubenswrapper[4980]: I0107 03:33:12.990698 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:12Z","lastTransitionTime":"2026-01-07T03:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.094061 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.094133 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.094158 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.094189 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.094209 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.197042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.197099 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.197116 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.197139 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.197156 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.299165 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.299216 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.299229 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.299244 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.299254 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.402249 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.402289 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.402301 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.402318 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.402330 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.531728 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.531781 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.531794 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.531811 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.532150 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.634599 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.634647 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.634662 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.634679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.634692 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.735658 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:13 crc kubenswrapper[4980]: E0107 03:33:13.735871 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.738181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.738233 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.738251 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.738278 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.738296 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.752194 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.771308 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cba
d02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.792134 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.809219 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.826539 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.839671 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.839708 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.839715 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.839729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.839737 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.844694 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.861591 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.875465 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.893698 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.906362 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.922104 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.941486 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.941547 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.941601 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.941627 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.941656 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:13Z","lastTransitionTime":"2026-01-07T03:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.950831 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.966461 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:13 crc kubenswrapper[4980]: I0107 03:33:13.987813 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.006081 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.025419 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.045344 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.045410 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.045430 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.045455 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.045471 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.045614 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:
32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.148640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.148713 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.148731 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.148754 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.148773 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.251242 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.251310 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.251333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.251362 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.251384 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.354481 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.354550 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.354604 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.354635 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.354657 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.457941 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.458000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.458019 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.458043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.458060 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.560738 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.561170 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.561328 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.561473 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.561645 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.664082 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.664181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.664192 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.664209 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.664220 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.734960 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.735006 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.734983 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:14 crc kubenswrapper[4980]: E0107 03:33:14.735128 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:14 crc kubenswrapper[4980]: E0107 03:33:14.735296 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:14 crc kubenswrapper[4980]: E0107 03:33:14.735419 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.766279 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.766324 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.766341 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.766365 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.766382 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.868805 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.868893 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.868917 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.868950 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.868974 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.972136 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.972198 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.972214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.972239 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:14 crc kubenswrapper[4980]: I0107 03:33:14.972256 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:14Z","lastTransitionTime":"2026-01-07T03:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.075109 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.075152 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.075173 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.075189 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.075199 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.178329 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.178387 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.178404 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.178431 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.178450 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.281549 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.281738 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.281758 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.281783 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.281801 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.384370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.384450 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.384473 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.384503 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.384527 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.486705 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.486772 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.486794 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.486823 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.486844 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.589887 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.589977 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.589998 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.590026 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.590044 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.693217 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.693271 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.693290 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.693313 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.693329 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.735243 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:15 crc kubenswrapper[4980]: E0107 03:33:15.735369 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.796335 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.796385 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.796415 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.796440 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.796459 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.899176 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.899226 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.899244 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.899263 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:15 crc kubenswrapper[4980]: I0107 03:33:15.899278 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:15Z","lastTransitionTime":"2026-01-07T03:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.002314 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.002364 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.002380 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.002399 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.002417 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.052849 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:16 crc kubenswrapper[4980]: E0107 03:33:16.053064 4980 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:16 crc kubenswrapper[4980]: E0107 03:33:16.053143 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:24.053122271 +0000 UTC m=+50.618817036 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.104771 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.104843 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.104862 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.104888 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.104908 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.207264 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.207314 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.207332 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.207352 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.207368 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.310143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.310206 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.310224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.310249 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.310266 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.412375 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.412433 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.412450 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.412473 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.412491 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.515844 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.515875 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.515886 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.515902 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.515913 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.618116 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.618173 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.618190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.618215 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.618236 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.720362 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.720416 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.720430 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.720454 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.720468 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.734971 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:16 crc kubenswrapper[4980]: E0107 03:33:16.738830 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.738865 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.738931 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:16 crc kubenswrapper[4980]: E0107 03:33:16.739001 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:16 crc kubenswrapper[4980]: E0107 03:33:16.739976 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.823077 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.823135 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.823153 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.823175 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.823192 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.925247 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.925324 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.925342 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.925367 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:16 crc kubenswrapper[4980]: I0107 03:33:16.925385 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:16Z","lastTransitionTime":"2026-01-07T03:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.015893 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.015947 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.015964 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.015988 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.016004 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: E0107 03:33:17.035671 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:17Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.041277 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.041316 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.041333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.041352 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.041367 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: E0107 03:33:17.062855 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:17Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.068736 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.068806 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.068832 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.068857 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.068875 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: E0107 03:33:17.088887 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:17Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.093516 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.093605 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.093631 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.093661 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.093684 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: E0107 03:33:17.112285 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:17Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.116154 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.116205 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.116223 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.116248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.116267 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: E0107 03:33:17.130970 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:17Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:17 crc kubenswrapper[4980]: E0107 03:33:17.131307 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.133459 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.133517 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.133535 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.133588 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.133609 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.236739 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.236795 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.236807 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.236828 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.236841 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.339964 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.340022 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.340032 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.340062 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.340076 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.443084 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.443153 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.443173 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.443197 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.443214 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.545915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.545951 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.545961 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.545977 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.545989 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.648843 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.648880 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.648907 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.648920 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.648929 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.734719 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:17 crc kubenswrapper[4980]: E0107 03:33:17.734929 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.751340 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.751404 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.751425 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.751448 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.751466 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.854376 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.854490 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.854513 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.854533 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.854549 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.957103 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.957295 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.957370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.957397 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:17 crc kubenswrapper[4980]: I0107 03:33:17.957459 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:17Z","lastTransitionTime":"2026-01-07T03:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.060484 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.060537 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.060580 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.060602 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.060620 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.163687 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.163747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.163764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.163788 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.163807 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.266831 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.266884 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.266900 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.266923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.266940 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.369380 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.369442 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.369459 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.369483 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.369501 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.472223 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.472281 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.472303 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.472329 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.472347 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.575244 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.575330 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.575348 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.575371 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.575388 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.678194 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.678253 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.678270 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.678297 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.678316 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.735163 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.735164 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.735184 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:18 crc kubenswrapper[4980]: E0107 03:33:18.735434 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:18 crc kubenswrapper[4980]: E0107 03:33:18.735691 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:18 crc kubenswrapper[4980]: E0107 03:33:18.735836 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.781450 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.781515 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.781534 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.781591 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.781611 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.884749 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.884813 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.884830 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.884857 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.884877 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.987866 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.987928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.987946 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.987972 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:18 crc kubenswrapper[4980]: I0107 03:33:18.987988 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:18Z","lastTransitionTime":"2026-01-07T03:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.090796 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.090859 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.091082 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.091111 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.091130 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.194504 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.194627 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.194645 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.194668 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.194686 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.297239 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.297284 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.297301 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.297321 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.297338 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.400192 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.400251 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.400272 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.400298 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.400325 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.503370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.503447 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.503466 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.503496 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.503516 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.606638 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.606734 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.606753 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.606781 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.606802 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.710099 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.710190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.710215 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.710250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.710274 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.734871 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:19 crc kubenswrapper[4980]: E0107 03:33:19.735234 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.813672 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.813731 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.813750 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.813774 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.813793 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.916423 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.916519 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.916537 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.916598 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:19 crc kubenswrapper[4980]: I0107 03:33:19.916643 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:19Z","lastTransitionTime":"2026-01-07T03:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.019224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.019301 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.019333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.019370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.019395 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.123043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.123093 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.123105 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.123123 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.123136 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.225722 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.225798 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.225821 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.225852 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.225873 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.329042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.329095 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.329108 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.329128 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.329141 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.432863 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.432934 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.432953 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.432990 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.433008 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.537126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.537194 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.537211 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.537239 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.537258 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.640708 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.640775 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.640795 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.640825 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.640845 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.735451 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.735614 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:20 crc kubenswrapper[4980]: E0107 03:33:20.735662 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:20 crc kubenswrapper[4980]: E0107 03:33:20.735864 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.735908 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:20 crc kubenswrapper[4980]: E0107 03:33:20.736121 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.744201 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.744279 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.744300 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.744328 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.744348 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.847824 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.847891 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.847911 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.847938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.847963 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.950797 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.950874 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.950892 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.950919 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:20 crc kubenswrapper[4980]: I0107 03:33:20.950937 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:20Z","lastTransitionTime":"2026-01-07T03:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.054267 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.054326 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.054339 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.054377 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.054397 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.158425 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.158497 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.158516 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.158547 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.158602 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.262026 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.262086 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.262101 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.262126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.262141 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.365522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.365597 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.365611 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.365632 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.365646 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.468719 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.468778 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.468792 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.468814 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.468832 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.572098 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.572143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.572155 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.572174 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.572184 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.675677 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.675746 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.675764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.675840 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.675875 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.735048 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:21 crc kubenswrapper[4980]: E0107 03:33:21.735257 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.778467 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.778532 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.778550 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.778614 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.778633 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.881497 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.881640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.881662 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.881690 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.881710 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.984854 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.984939 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.984954 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.984980 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:21 crc kubenswrapper[4980]: I0107 03:33:21.984993 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:21Z","lastTransitionTime":"2026-01-07T03:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.088432 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.088504 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.088531 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.088593 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.088616 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.191173 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.191239 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.191258 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.191289 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.191308 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.294289 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.294354 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.294370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.294393 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.294411 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.397418 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.397478 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.397491 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.397514 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.397528 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.499978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.500040 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.500052 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.500094 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.500105 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.604504 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.604588 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.604603 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.604623 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.604636 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.707027 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.707080 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.707089 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.707104 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.707114 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.735435 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:22 crc kubenswrapper[4980]: E0107 03:33:22.735805 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.735526 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:22 crc kubenswrapper[4980]: E0107 03:33:22.736081 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.735444 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:22 crc kubenswrapper[4980]: E0107 03:33:22.736350 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.809771 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.809834 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.809850 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.809878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.809897 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.917318 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.917399 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.917429 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.917480 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:22 crc kubenswrapper[4980]: I0107 03:33:22.917505 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:22Z","lastTransitionTime":"2026-01-07T03:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.020052 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.020095 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.020106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.020122 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.020133 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.122841 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.122893 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.122907 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.122924 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.122936 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.226747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.226807 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.226821 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.226840 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.226853 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.330699 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.330765 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.330791 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.330823 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.330845 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.434496 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.434596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.434615 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.434641 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.434659 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.538027 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.538074 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.538092 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.538113 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.538130 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.641306 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.641369 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.641388 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.641417 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.641436 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.735480 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:23 crc kubenswrapper[4980]: E0107 03:33:23.736178 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.736612 4980 scope.go:117] "RemoveContainer" containerID="df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.745651 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.745785 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.745847 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.745906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.745965 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.759246 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.778326 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.792103 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.805265 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.817473 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.838535 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.848569 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.848609 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.848622 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.848642 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.848654 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.851031 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.863347 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.875861 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.886543 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.916939 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.934917 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.954339 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.954394 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.954412 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.954478 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.954498 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:23Z","lastTransitionTime":"2026-01-07T03:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.961898 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:23 crc kubenswrapper[4980]: I0107 03:33:23.979667 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:23Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.005284 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.026241 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.047014 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.055268 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.055529 4980 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.055632 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:40.055609334 +0000 UTC m=+66.621304089 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.057660 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.057701 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.057714 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.057736 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.057749 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.101120 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/1.log" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.105159 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.105737 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.126707 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.159685 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2
bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.163177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.163226 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.163238 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.163264 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.163279 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.176327 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.197457 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 
03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.218186 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.245622 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.266306 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc 
kubenswrapper[4980]: I0107 03:33:24.266355 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.266372 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.266392 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.266405 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.272429 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.295397 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.319734 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.338117 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.352388 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.369577 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.369795 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.369857 4980 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.369920 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.369987 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.371791 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.398813 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.414903 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.430714 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.445808 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.462771 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.462811 4980 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:24Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463036 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:33:56.462993142 +0000 UTC m=+83.028687877 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.463168 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.463231 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.463280 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463408 4980 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 
07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463508 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463535 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463536 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463573 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463694 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:56.463520887 +0000 UTC m=+83.029215622 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463771 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:56.463762895 +0000 UTC m=+83.029457630 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.463853 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:56.463843797 +0000 UTC m=+83.029538532 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.472392 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.472429 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.472438 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.472459 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.472474 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.564661 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.564880 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.564904 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.564919 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.564989 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-07 03:33:56.5649679 +0000 UTC m=+83.130662625 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.575708 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.575758 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.575773 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.575796 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.575816 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.678848 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.678901 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.678911 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.678928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.679279 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.734696 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.734851 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.735002 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.735086 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.735108 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:24 crc kubenswrapper[4980]: E0107 03:33:24.735345 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.789346 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.789516 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.789548 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.789641 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.789683 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.893462 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.893506 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.893516 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.893535 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.893547 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.996882 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.996957 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.996970 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.996995 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:24 crc kubenswrapper[4980]: I0107 03:33:24.997010 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:24Z","lastTransitionTime":"2026-01-07T03:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.101591 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.101653 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.101669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.101692 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.101707 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.112237 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/2.log" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.112902 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/1.log" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.116546 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75" exitCode=1 Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.116607 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.116658 4980 scope.go:117] "RemoveContainer" containerID="df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.118079 4980 scope.go:117] "RemoveContainer" containerID="387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75" Jan 07 03:33:25 crc kubenswrapper[4980]: E0107 03:33:25.118437 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.144360 4980 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.162295 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.177642 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.192290 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.205179 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.205229 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.205247 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 
03:33:25.205273 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.205293 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.213171 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.225723 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.258623 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.276115 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.307818 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.307886 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.307906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.307935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.307955 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.312711 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df56792fee2d1bf4a3e00b91d9f3f5f97a95a77e3768db2b655e9a95da8e358d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:05Z\\\",\\\"message\\\":\\\"userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0107 03:33:05.151793 6482 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0107 03:33:05.151816 6482 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0107 03:33:05.151850 6482 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0107 
03:33:05.151869 6482 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0107 03:33:05.152641 6482 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0107 03:33:05.152732 6482 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0107 03:33:05.152738 6482 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0107 03:33:05.152772 6482 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0107 03:33:05.152824 6482 factory.go:656] Stopping watch factory\\\\nI0107 03:33:05.152839 6482 ovnkube.go:599] Stopped ovnkube\\\\nI0107 03:33:05.152868 6482 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0107 03:33:05.152884 6482 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0107 03:33:05.152892 6482 handler.go:208] Removed *v1.Node event handler 7\\\\nI0107 03:33:05.152900 6482 handler.go:208] Removed *v1.Node event handler 2\\\\nI0107 03:33:05.152924 6482 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0107 03:33:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat 
Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e4
58d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.329385 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.352713 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.374298 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.394056 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.410725 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.410802 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.410823 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.410855 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.410872 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.416130 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.438846 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.455114 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.474975 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:25Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.513750 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.513836 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.513859 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.513901 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.513928 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.617208 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.617275 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.617298 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.617328 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.617348 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.720718 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.720789 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.720811 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.720840 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.720864 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.735234 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:25 crc kubenswrapper[4980]: E0107 03:33:25.735459 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.823580 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.823646 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.823666 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.823692 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.823716 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.926732 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.926807 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.926828 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.926858 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:25 crc kubenswrapper[4980]: I0107 03:33:25.926877 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:25Z","lastTransitionTime":"2026-01-07T03:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.029718 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.029804 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.029827 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.029858 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.029880 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.121404 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/2.log" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.125818 4980 scope.go:117] "RemoveContainer" containerID="387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75" Jan 07 03:33:26 crc kubenswrapper[4980]: E0107 03:33:26.126112 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.136016 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.136089 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.136109 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.136145 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.136169 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.167514 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"re
ason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.186199 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.216856 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.231635 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.240523 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.240774 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.240926 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.241064 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.241193 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.252847 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.274521 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.294523 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.317653 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.343264 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.344598 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.344686 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.344709 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.345942 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.346076 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.360529 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.379305 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cba
d02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.400408 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.419104 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.440755 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.449749 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.449896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.449920 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.449948 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.449972 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.462880 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.485120 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.501298 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.554084 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.554157 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.554177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.554207 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.554232 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.658023 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.658104 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.658122 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.658158 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.658180 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.696159 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.711334 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.720317 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.735350 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.735481 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.735376 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:26 crc kubenswrapper[4980]: E0107 03:33:26.735644 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:26 crc kubenswrapper[4980]: E0107 03:33:26.735775 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:26 crc kubenswrapper[4980]: E0107 03:33:26.735950 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.742427 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.759752 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.762192 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: 
I0107 03:33:26.762259 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.762284 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.762322 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.762342 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.783457 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.809799 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.828398 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.848330 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.863981 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.869056 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.869120 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.869142 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.869170 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.869188 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.879323 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.903204 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.926886 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.944380 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.957383 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.971858 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.971902 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.971943 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.971965 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.971980 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:26Z","lastTransitionTime":"2026-01-07T03:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.981922 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:26 crc kubenswrapper[4980]: I0107 03:33:26.996009 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:26Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.027662 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:27Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.046850 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:27Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.074688 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.074726 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.074737 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.074755 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.074768 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.176959 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.177013 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.177026 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.177046 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.177061 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.280240 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.280317 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.280335 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.280367 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.280419 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.384435 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.384514 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.384534 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.384631 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.384660 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.482599 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.482674 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.482692 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.482721 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.482739 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: E0107 03:33:27.504409 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:27Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.510949 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.511032 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.511055 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.511084 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.511105 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: E0107 03:33:27.532941 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:27Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.538805 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.538877 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.538898 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.538926 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.538947 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: E0107 03:33:27.560986 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:27Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.565711 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.565783 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.565802 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.565829 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.565867 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.593664 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.593709 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.593729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.593753 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.593771 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: E0107 03:33:27.615503 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.617298 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.617356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.617374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.617394 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.617411 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.720518 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.720608 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.720628 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.720651 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.720670 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.735323 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:27 crc kubenswrapper[4980]: E0107 03:33:27.735586 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.823885 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.823943 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.823962 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.823984 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.824004 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.927230 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.927310 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.927329 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.927360 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:27 crc kubenswrapper[4980]: I0107 03:33:27.927380 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:27Z","lastTransitionTime":"2026-01-07T03:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.030743 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.030813 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.030834 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.030865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.030887 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.132733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.132802 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.132820 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.132916 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.132937 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.236840 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.236933 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.236950 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.236975 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.236995 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.340190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.340235 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.340253 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.340274 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.340292 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.443790 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.443868 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.443886 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.443914 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.443936 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.546644 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.546719 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.546737 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.546768 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.546788 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.649630 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.649697 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.649719 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.649745 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.649763 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.734738 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.734886 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:28 crc kubenswrapper[4980]: E0107 03:33:28.735027 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.735112 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:28 crc kubenswrapper[4980]: E0107 03:33:28.735237 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:28 crc kubenswrapper[4980]: E0107 03:33:28.735415 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.753524 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.753660 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.753683 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.753710 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.753728 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.858527 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.858622 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.858640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.858702 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.858721 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.962260 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.962317 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.962338 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.962366 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:28 crc kubenswrapper[4980]: I0107 03:33:28.962384 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:28Z","lastTransitionTime":"2026-01-07T03:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.065601 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.065654 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.065672 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.065698 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.065717 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.168047 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.168106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.168124 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.168150 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.168171 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.271745 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.271808 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.271826 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.271852 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.271873 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.374897 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.374958 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.374975 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.375004 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.375020 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.478131 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.478200 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.478219 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.478250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.478279 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.581370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.581468 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.581525 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.581586 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.581607 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.685079 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.685142 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.685161 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.685195 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.685213 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.735099 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:29 crc kubenswrapper[4980]: E0107 03:33:29.735286 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.804984 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.805049 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.805066 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.805093 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.805111 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.908393 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.908900 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.909104 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.909306 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:29 crc kubenswrapper[4980]: I0107 03:33:29.909504 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:29Z","lastTransitionTime":"2026-01-07T03:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.013972 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.014035 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.014052 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.014080 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.014098 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.117495 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.117646 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.117669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.117692 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.117709 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.220711 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.220769 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.220786 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.220812 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.220830 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.324547 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.324657 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.324675 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.324701 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.324718 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.427799 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.427883 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.427905 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.427938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.427963 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.531255 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.531333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.531352 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.531378 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.531395 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.635321 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.635404 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.635424 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.635461 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.635487 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.734731 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.734843 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:30 crc kubenswrapper[4980]: E0107 03:33:30.734882 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.734957 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:30 crc kubenswrapper[4980]: E0107 03:33:30.735264 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:30 crc kubenswrapper[4980]: E0107 03:33:30.735334 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.738949 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.739006 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.739022 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.739052 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.739070 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.842009 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.842077 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.842097 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.842126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.842146 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.945425 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.945492 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.945514 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.945542 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:30 crc kubenswrapper[4980]: I0107 03:33:30.945594 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:30Z","lastTransitionTime":"2026-01-07T03:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.049048 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.049140 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.049162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.049196 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.049219 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.151921 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.151978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.151995 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.152020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.152038 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.254811 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.254868 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.254880 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.254903 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.254915 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.358648 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.358700 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.358712 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.358730 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.358742 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.462517 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.462637 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.462660 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.462691 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.462721 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.565726 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.565783 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.565794 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.565812 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.565825 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.669439 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.669529 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.669583 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.669619 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.669638 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.735146 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:31 crc kubenswrapper[4980]: E0107 03:33:31.735462 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.772181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.772224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.772235 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.772251 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.772263 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.875428 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.875495 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.875514 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.875539 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.875584 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.978362 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.978448 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.978463 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.978487 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:31 crc kubenswrapper[4980]: I0107 03:33:31.978503 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:31Z","lastTransitionTime":"2026-01-07T03:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.081382 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.081450 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.081467 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.081493 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.081510 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.183858 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.183923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.183940 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.183965 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.183982 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.286346 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.286416 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.286434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.286458 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.286478 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.398354 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.398383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.398391 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.398403 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.398411 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.501547 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.501615 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.501626 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.501643 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.501655 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.604481 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.604548 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.604595 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.604621 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.604638 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.706944 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.707002 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.707020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.707042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.707058 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.735405 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.735478 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:32 crc kubenswrapper[4980]: E0107 03:33:32.735576 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.735697 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:32 crc kubenswrapper[4980]: E0107 03:33:32.735774 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:32 crc kubenswrapper[4980]: E0107 03:33:32.735885 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.809162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.809214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.809231 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.809251 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.809267 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.911851 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.911904 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.911921 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.911942 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:32 crc kubenswrapper[4980]: I0107 03:33:32.911959 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:32Z","lastTransitionTime":"2026-01-07T03:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.013845 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.013870 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.013878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.013892 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.013900 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.116113 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.116134 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.116142 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.116151 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.116158 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.217715 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.217737 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.217744 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.217754 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.217763 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.319374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.319423 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.319440 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.319460 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.319474 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.422214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.422276 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.422293 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.422319 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.422337 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.524789 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.524848 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.524865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.524889 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.524906 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.639088 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.639131 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.639141 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.639155 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.639164 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.734678 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:33 crc kubenswrapper[4980]: E0107 03:33:33.734833 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.742079 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.742127 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.742144 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.742168 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.742185 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.752992 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.766790 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.781747 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d994
8e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.801940 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.822774 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.840348 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.843925 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.843965 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.843978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.843993 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.844360 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.856533 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc 
kubenswrapper[4980]: I0107 03:33:33.870430 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.884661 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.902744 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.920779 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.938269 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.946931 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.946971 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.946986 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 
03:33:33.947006 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.947022 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:33Z","lastTransitionTime":"2026-01-07T03:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.956328 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:33 crc kubenswrapper[4980]: I0107 03:33:33.972204 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:33Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:34 crc 
kubenswrapper[4980]: I0107 03:33:34.003636 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:34Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.021930 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:34Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.039424 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:34Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.050374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.050483 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.050503 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.050603 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.050624 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.071238 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:34Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.159366 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.159421 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.159432 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.159452 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.159462 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.261728 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.261807 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.261833 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.261865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.261888 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.365468 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.365503 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.365512 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.365528 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.365537 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.469201 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.469261 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.469282 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.469309 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.469329 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.571629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.571693 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.571712 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.571744 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.571767 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.674831 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.674889 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.674906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.674929 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.674947 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.735069 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.735181 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.735248 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:34 crc kubenswrapper[4980]: E0107 03:33:34.735457 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:34 crc kubenswrapper[4980]: E0107 03:33:34.735673 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:34 crc kubenswrapper[4980]: E0107 03:33:34.735794 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.777761 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.777912 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.777936 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.777960 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.778018 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.881181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.881211 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.881250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.881265 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.881277 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.985206 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.985262 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.985279 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.985306 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:34 crc kubenswrapper[4980]: I0107 03:33:34.985354 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:34Z","lastTransitionTime":"2026-01-07T03:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.088041 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.088089 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.088098 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.088115 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.088124 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.190528 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.190609 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.190627 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.190651 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.190667 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.293539 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.293607 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.293619 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.293636 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.293646 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.398155 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.398204 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.398221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.398242 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.398259 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.501512 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.501635 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.501665 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.501691 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.501708 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.604606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.604664 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.604681 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.604706 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.604723 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.707306 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.707370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.707388 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.707413 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.707430 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.735152 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:35 crc kubenswrapper[4980]: E0107 03:33:35.735376 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.809653 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.809714 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.809732 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.809755 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.809772 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.913178 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.913226 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.913237 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.913257 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:35 crc kubenswrapper[4980]: I0107 03:33:35.913271 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:35Z","lastTransitionTime":"2026-01-07T03:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.016499 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.016598 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.016618 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.016642 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.016661 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.119999 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.120060 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.120077 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.120102 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.120121 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.222854 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.222895 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.222906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.222923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.222934 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.326163 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.326211 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.326228 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.326252 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.326270 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.429329 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.429397 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.429415 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.429440 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.429456 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.531967 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.532007 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.532020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.532039 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.532051 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.634115 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.634179 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.634196 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.634220 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.634238 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.735372 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.735468 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:36 crc kubenswrapper[4980]: E0107 03:33:36.735539 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.735484 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:36 crc kubenswrapper[4980]: E0107 03:33:36.735684 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:36 crc kubenswrapper[4980]: E0107 03:33:36.735812 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.737162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.737204 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.737221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.737241 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.737258 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.840258 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.840301 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.840312 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.840329 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.840339 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.943070 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.943143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.943162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.943188 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:36 crc kubenswrapper[4980]: I0107 03:33:36.943206 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:36Z","lastTransitionTime":"2026-01-07T03:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.045830 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.045888 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.045903 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.045923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.045939 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.148587 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.148635 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.148650 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.148671 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.148688 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.250743 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.250781 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.250790 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.250803 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.250812 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.354025 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.354082 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.354099 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.354126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.354143 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.457914 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.457989 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.458008 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.458037 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.458056 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.561215 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.561278 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.561295 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.561323 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.561343 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.663684 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.663747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.663763 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.663790 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.663808 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.735291 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:37 crc kubenswrapper[4980]: E0107 03:33:37.735475 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.765841 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.765906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.765923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.765946 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.765965 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.869526 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.869648 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.869671 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.869703 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.869730 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.962406 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.962515 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.962537 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.962595 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.962615 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:37 crc kubenswrapper[4980]: E0107 03:33:37.978527 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:37Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.983494 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.983604 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.983634 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.983666 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:37 crc kubenswrapper[4980]: I0107 03:33:37.983690 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:37Z","lastTransitionTime":"2026-01-07T03:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.004388 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:38Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.008992 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.009042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.009069 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.009099 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.009122 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.027603 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:38Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.032004 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.032065 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.032080 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.032100 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.032115 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.049779 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:38Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.055100 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.055162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.055176 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.055201 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.055216 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.080688 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:38Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.080811 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.083272 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.083307 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.083318 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.083335 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.083346 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.187431 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.187492 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.187509 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.187538 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.187583 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.290506 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.290548 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.290590 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.290608 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.290621 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.393810 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.393854 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.393863 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.393878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.393887 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.496208 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.496630 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.496796 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.496953 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.497084 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.600135 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.600332 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.600356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.600383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.600400 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.702880 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.702947 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.702964 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.702991 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.703013 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.734788 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.734884 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.734893 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.734959 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.735043 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:38 crc kubenswrapper[4980]: E0107 03:33:38.735259 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.806115 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.806204 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.806231 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.806334 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.806448 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.909899 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.909981 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.910006 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.910403 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:38 crc kubenswrapper[4980]: I0107 03:33:38.910958 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:38Z","lastTransitionTime":"2026-01-07T03:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.013637 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.013685 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.013698 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.013714 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.013726 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.116774 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.116818 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.116828 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.116844 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.116854 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.219836 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.219880 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.219889 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.219905 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.219915 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.322657 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.322721 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.322730 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.322745 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.322758 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.425769 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.425839 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.425857 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.425883 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.425902 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.528193 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.528237 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.528245 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.528260 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.528271 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.631278 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.631339 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.631356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.631381 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.631398 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.734538 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.734590 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.734595 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.734715 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.734729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.734738 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: E0107 03:33:39.734695 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.735389 4980 scope.go:117] "RemoveContainer" containerID="387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75" Jan 07 03:33:39 crc kubenswrapper[4980]: E0107 03:33:39.735734 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.837908 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.837969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.837988 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.838012 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.838031 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.971456 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.971606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.971634 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.971663 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:39 crc kubenswrapper[4980]: I0107 03:33:39.971680 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:39Z","lastTransitionTime":"2026-01-07T03:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.071982 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:40 crc kubenswrapper[4980]: E0107 03:33:40.072196 4980 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:40 crc kubenswrapper[4980]: E0107 03:33:40.072292 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. No retries permitted until 2026-01-07 03:34:12.07227136 +0000 UTC m=+98.637966115 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.074614 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.074682 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.074696 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.074717 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.074729 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.177764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.177831 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.177851 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.177882 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.177905 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.211395 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/0.log" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.211470 4980 generic.go:334] "Generic (PLEG): container finished" podID="3b3e552e-9608-4577-86c3-5f7573ef22f6" containerID="5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024" exitCode=1 Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.211525 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerDied","Data":"5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.212381 4980 scope.go:117] "RemoveContainer" containerID="5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.230869 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.251094 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.264154 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.280519 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.284255 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.284285 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.284297 4980 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.284315 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.284326 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.299232 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.321569 4980 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.337415 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.352248 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.364628 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.375495 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.386632 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.386672 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.386683 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.386701 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.386713 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.407317 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.423650 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.441366 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.471250 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.489345 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.489374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.489382 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.489396 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.489406 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.491268 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc 
kubenswrapper[4980]: I0107 03:33:40.508479 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.524415 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.540744 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:40Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.592146 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.592202 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.592219 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc 
kubenswrapper[4980]: I0107 03:33:40.592243 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.592260 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.694476 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.694510 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.694518 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.694530 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.694540 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.735123 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.735234 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.735123 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:40 crc kubenswrapper[4980]: E0107 03:33:40.735500 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:40 crc kubenswrapper[4980]: E0107 03:33:40.735667 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:40 crc kubenswrapper[4980]: E0107 03:33:40.735820 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.798060 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.798126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.798143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.798170 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.798188 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.900283 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.900317 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.900325 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.900340 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:40 crc kubenswrapper[4980]: I0107 03:33:40.900350 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:40Z","lastTransitionTime":"2026-01-07T03:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.003434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.003796 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.003980 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.004120 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.004244 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.107268 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.107320 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.107331 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.107349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.107359 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.231966 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.231994 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.232003 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.232016 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.232024 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.235710 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/0.log" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.235767 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerStarted","Data":"16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.251313 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.280090 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update 
Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.295224 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.316633 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.332658 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.334415 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.334444 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.334456 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.334468 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.334477 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.345334 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.364356 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] 
Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.377078 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.392617 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.407727 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.427737 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.437246 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.437309 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.437323 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.437349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.437363 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.439050 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.452660 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.463028 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.475388 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.488706 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.507986 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.524021 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:41Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.539729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.539752 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.539761 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.539774 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.539784 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.641955 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.641989 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.641999 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.642010 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.642020 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.734753 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:41 crc kubenswrapper[4980]: E0107 03:33:41.734845 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.743973 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.744018 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.744028 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.744041 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.744050 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.845882 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.846068 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.846134 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.846206 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.846273 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.948896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.949191 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.949299 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.949380 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:41 crc kubenswrapper[4980]: I0107 03:33:41.949440 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:41Z","lastTransitionTime":"2026-01-07T03:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.052457 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.052525 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.052536 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.052577 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.052591 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.155433 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.155844 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.155984 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.156589 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.156747 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.259984 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.260032 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.260049 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.260076 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.260095 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.362976 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.363027 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.363045 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.363072 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.363090 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.465033 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.465369 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.465550 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.465726 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.465875 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.568832 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.569211 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.569337 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.569475 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.569632 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.671695 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.671733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.671742 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.671757 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.671768 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.735454 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:42 crc kubenswrapper[4980]: E0107 03:33:42.735571 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.735456 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:42 crc kubenswrapper[4980]: E0107 03:33:42.735757 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.736408 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:42 crc kubenswrapper[4980]: E0107 03:33:42.736731 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.774668 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.774895 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.775028 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.775221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.775360 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.878649 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.878945 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.879107 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.879242 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.879401 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.982281 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.982345 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.982364 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.982387 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:42 crc kubenswrapper[4980]: I0107 03:33:42.982405 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:42Z","lastTransitionTime":"2026-01-07T03:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.084624 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.084710 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.084729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.084759 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.084779 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.187839 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.187928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.187975 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.188001 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.188050 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.291254 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.291789 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.291942 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.292126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.292282 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.396027 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.396129 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.396156 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.396197 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.396223 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.500429 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.500882 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.501030 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.501186 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.501329 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.603801 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.603863 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.603883 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.603909 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.603929 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.707579 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.707607 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.707617 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.707633 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.707643 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.735734 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:43 crc kubenswrapper[4980]: E0107 03:33:43.735973 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.758209 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.776763 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.792430 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.810296 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.810321 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.810330 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 
03:33:43.810344 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.810354 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.811278 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.827318 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.847741 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.877465 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.894696 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.911284 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.912820 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.912868 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.912877 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.912890 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.912899 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:43Z","lastTransitionTime":"2026-01-07T03:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.940381 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.954937 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.975666 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,
\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostro
ot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:43 crc kubenswrapper[4980]: I0107 03:33:43.990431 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:43Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.006344 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:44Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.015142 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.015189 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.015202 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.015219 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.015229 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.025264 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:44Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.041479 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:44Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.060299 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:44Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.081924 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 
03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60bee
e477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:44Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.117879 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.117945 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.117956 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.117986 4980 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.118000 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.220355 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.220408 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.220420 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.220439 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.220460 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.323078 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.323132 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.323151 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.323177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.323200 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.426478 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.426521 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.426530 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.426546 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.426579 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.529485 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.529539 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.529594 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.529619 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.529638 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.631761 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.631832 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.631850 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.631873 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.631888 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734394 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734450 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734464 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734485 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734499 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734700 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734765 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.734718 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:44 crc kubenswrapper[4980]: E0107 03:33:44.734874 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:44 crc kubenswrapper[4980]: E0107 03:33:44.734943 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:44 crc kubenswrapper[4980]: E0107 03:33:44.735065 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.836871 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.836935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.836957 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.836984 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.837003 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.939633 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.939689 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.939707 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.939724 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:44 crc kubenswrapper[4980]: I0107 03:33:44.939736 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:44Z","lastTransitionTime":"2026-01-07T03:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.042624 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.042674 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.042684 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.042701 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.042712 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.144990 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.145059 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.145077 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.145103 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.145123 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.247875 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.247955 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.247978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.248007 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.248028 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.350970 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.351041 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.351061 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.351090 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.351113 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.453381 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.453486 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.453507 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.453533 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.453586 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.556928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.557012 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.557028 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.557053 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.557068 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.660163 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.660208 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.660220 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.660239 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.660251 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.735646 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:45 crc kubenswrapper[4980]: E0107 03:33:45.735833 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.762867 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.762938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.762955 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.762982 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.763000 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.865106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.865210 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.865226 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.865248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.865268 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.967757 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.967819 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.967838 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.967866 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:45 crc kubenswrapper[4980]: I0107 03:33:45.967887 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:45Z","lastTransitionTime":"2026-01-07T03:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.070743 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.070813 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.070832 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.070856 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.070871 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.174214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.174278 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.174299 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.174333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.174351 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.280012 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.280106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.280139 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.280177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.280205 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.383667 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.383873 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.384007 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.384149 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.384279 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.486680 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.486720 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.486732 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.486752 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.486767 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.590042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.590260 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.590420 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.590597 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.590794 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.694826 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.694896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.694916 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.694950 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.694971 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.734609 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.734764 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.734666 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:46 crc kubenswrapper[4980]: E0107 03:33:46.734792 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:46 crc kubenswrapper[4980]: E0107 03:33:46.734959 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:46 crc kubenswrapper[4980]: E0107 03:33:46.735101 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.798474 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.798765 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.798917 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.799073 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.799205 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.902434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.902500 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.902524 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.902582 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:46 crc kubenswrapper[4980]: I0107 03:33:46.902602 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:46Z","lastTransitionTime":"2026-01-07T03:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.005783 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.006239 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.006402 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.006587 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.006817 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.110383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.110443 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.110462 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.110488 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.110507 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.213876 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.213938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.213955 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.214007 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.214026 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.316734 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.316798 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.316816 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.316841 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.316861 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.420025 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.420098 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.420115 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.420141 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.420160 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.523318 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.523372 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.523389 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.523412 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.523434 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.626086 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.626139 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.626155 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.626179 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.626197 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.729182 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.729250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.729268 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.729295 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.729314 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.735663 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:47 crc kubenswrapper[4980]: E0107 03:33:47.736072 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.749617 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.832049 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.832089 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.832098 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.832112 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.832122 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.933785 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.933838 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.933849 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.933870 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:47 crc kubenswrapper[4980]: I0107 03:33:47.933883 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:47Z","lastTransitionTime":"2026-01-07T03:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.036214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.036263 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.036274 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.036290 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.036302 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.141219 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.141262 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.141273 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.141291 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.141307 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.215340 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.215377 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.215385 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.215399 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.215409 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.237253 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:48Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.243867 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.243963 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.243991 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.244073 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.244103 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.264383 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:48Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.269613 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.269686 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.269706 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.269734 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.269757 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.287820 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:48Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.292610 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.292677 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.292698 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.292725 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.292748 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.314838 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:48Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.319959 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.320020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.320039 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.320068 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.320086 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.341006 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:48Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.341226 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.343294 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.343354 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.343373 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.343399 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.343420 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.446117 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.446161 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.446172 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.446190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.446202 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.548047 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.548114 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.548128 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.548147 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.548160 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.650899 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.651039 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.651072 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.651101 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.651120 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.735210 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.735405 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.735796 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.735920 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.736151 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:48 crc kubenswrapper[4980]: E0107 03:33:48.736252 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.754992 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.755065 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.755089 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.755119 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.755145 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.858829 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.858897 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.858916 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.858942 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.858959 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.963500 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.963588 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.963606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.963629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:48 crc kubenswrapper[4980]: I0107 03:33:48.963646 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:48Z","lastTransitionTime":"2026-01-07T03:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.067041 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.067137 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.067162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.067191 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.067214 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.170659 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.170709 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.170729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.170751 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.170767 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.274294 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.274356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.274373 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.274398 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.274417 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.377862 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.377922 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.377938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.377962 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.377982 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.481384 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.481440 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.481458 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.481480 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.481497 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.584848 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.585013 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.585042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.585133 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.585155 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.688175 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.688247 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.688266 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.688293 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.688311 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.735507 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:49 crc kubenswrapper[4980]: E0107 03:33:49.735766 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.791544 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.791718 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.791739 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.791766 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.791786 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.894162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.894203 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.894213 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.894227 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.894236 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.996739 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.996778 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.996788 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.996799 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:49 crc kubenswrapper[4980]: I0107 03:33:49.996808 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:49Z","lastTransitionTime":"2026-01-07T03:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.098847 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.098878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.098886 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.098896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.098904 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.201505 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.201604 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.201623 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.201648 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.201667 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.304629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.304703 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.304723 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.304748 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.304767 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.407408 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.407505 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.407522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.407543 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.407594 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.510087 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.510177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.510227 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.510250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.510267 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.613503 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.613602 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.613621 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.613652 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.613670 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.716797 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.716865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.716884 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.716910 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.716931 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.735522 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.735593 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:50 crc kubenswrapper[4980]: E0107 03:33:50.735708 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.735749 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:50 crc kubenswrapper[4980]: E0107 03:33:50.736223 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:50 crc kubenswrapper[4980]: E0107 03:33:50.736431 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.736801 4980 scope.go:117] "RemoveContainer" containerID="387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.820341 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.820412 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.820439 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.820468 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.820490 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.923711 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.924383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.924412 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.924445 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:50 crc kubenswrapper[4980]: I0107 03:33:50.924469 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:50Z","lastTransitionTime":"2026-01-07T03:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.027878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.027946 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.027971 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.028000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.028026 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.131500 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.131586 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.131605 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.131629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.131646 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.234842 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.234892 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.234910 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.234939 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.234961 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.301924 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/2.log" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.305000 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.305600 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.324791 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.337200 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc 
kubenswrapper[4980]: I0107 03:33:51.337249 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.337265 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.337327 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.337348 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.342473 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.370323 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config
/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.381349 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc 
kubenswrapper[4980]: I0107 03:33:51.393772 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"160280d9-c513-4102-9613-b3164459b751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523d0471ddbf76f60815470ee971e8b259c45255a4e7fd7844ca9b258b4bcbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.423875 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.438184 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.440311 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.440366 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.440383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.440407 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.440424 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.453978 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.469831 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] 
Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.485867 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f0414
08f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.508701 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.529120 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.543864 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.543922 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.543940 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.543964 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.543993 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.546943 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.565688 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.583343 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.597528 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.611134 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.629923 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.647382 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:51Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.647454 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.647502 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.647520 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.647544 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.647596 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.735281 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:51 crc kubenswrapper[4980]: E0107 03:33:51.735461 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.749897 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.750010 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.750031 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.750054 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.750073 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.852346 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.852398 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.852410 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.852427 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.852440 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.955901 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.955968 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.955981 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.956000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:51 crc kubenswrapper[4980]: I0107 03:33:51.956011 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:51Z","lastTransitionTime":"2026-01-07T03:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.058595 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.058642 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.058653 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.058681 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.058693 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.161346 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.161401 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.161418 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.161440 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.161459 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.264313 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.264375 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.264392 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.264418 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.264436 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.367639 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.367729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.367747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.367770 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.367808 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.470965 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.471031 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.471042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.471060 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.471072 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.574111 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.574190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.574213 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.574238 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.574260 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.676814 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.676874 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.676891 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.676915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.676932 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.735174 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.735246 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.735349 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:52 crc kubenswrapper[4980]: E0107 03:33:52.735621 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:52 crc kubenswrapper[4980]: E0107 03:33:52.735719 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:52 crc kubenswrapper[4980]: E0107 03:33:52.735907 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.779002 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.779042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.779053 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.779068 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.779078 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.881822 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.881924 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.881940 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.881963 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.881982 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.984352 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.984406 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.984415 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.984430 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:52 crc kubenswrapper[4980]: I0107 03:33:52.984438 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:52Z","lastTransitionTime":"2026-01-07T03:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.087446 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.087503 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.087520 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.087545 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.087595 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.190046 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.190080 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.190088 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.190104 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.190113 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.292969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.293027 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.293044 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.293069 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.293086 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.314262 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/3.log" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.314933 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/2.log" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.318141 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" exitCode=1 Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.318193 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.318244 4980 scope.go:117] "RemoveContainer" containerID="387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.319602 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:33:53 crc kubenswrapper[4980]: E0107 03:33:53.320010 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.346236 4980 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.367114 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.386623 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.397186 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.397246 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.397262 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.397286 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.397304 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.408476 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.428774 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.443333 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.463196 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"160280d9-c513-4102-9613-b3164459b751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523d0471ddbf76f60815470ee971e8b259c45255a4e7fd7844ca9b258b4bcbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.495496 4980 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402
fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee48
1cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.503597 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.503640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.503652 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.503669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.503682 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.520782 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508e
f7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.536140 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.554719 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:52Z\\\",\\\"message\\\":\\\"46444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-nv5s5 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-9ct5r openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb 
openshift-etcd/etcd-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-machine-config-operator/machine-config-daemon-hzlt6 openshift-multus/multus-additional-cni-plugins-rwpf2 openshift-multus/network-metrics-daemon-j75z7 openshift-network-diagnostics/network-check-target-xd92c openshift-image-registry/node-ca-9pk7v]\\\\nI0107 03:33:52.337498 7096 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0107 03:33:52.337521 7096 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-9pk7v\\\\nF0107 03:33:52.337532 7096 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d
9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.566196 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.582224 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.602237 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] 
Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.606737 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.606825 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.606844 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.606900 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.606919 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.621040 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.642727 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.666822 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.683731 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.701430 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.710150 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.710219 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.710239 4980 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.710264 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.710282 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.735167 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:53 crc kubenswrapper[4980]: E0107 03:33:53.735377 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.757322 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.778459 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.805617 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.812913 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.812967 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.812984 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.813009 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.813029 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.826702 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.850075 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.867651 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.885286 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"160280d9-c513-4102-9613-b3164459b751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523d0471ddbf76f60815470ee971e8b259c45255a4e7fd7844ca9b258b4bcbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.914708 4980 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402
fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee48
1cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.915357 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.915386 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.915397 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.915417 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.915428 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:53Z","lastTransitionTime":"2026-01-07T03:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.928301 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508e
f7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:53 crc kubenswrapper[4980]: I0107 03:33:53.973223 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:33:53Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.003473 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:52Z\\\",\\\"message\\\":\\\"46444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-nv5s5 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-9ct5r openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb 
openshift-etcd/etcd-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-machine-config-operator/machine-config-daemon-hzlt6 openshift-multus/multus-additional-cni-plugins-rwpf2 openshift-multus/network-metrics-daemon-j75z7 openshift-network-diagnostics/network-check-target-xd92c openshift-image-registry/node-ca-9pk7v]\\\\nI0107 03:33:52.337498 7096 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0107 03:33:52.337521 7096 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-9pk7v\\\\nF0107 03:33:52.337532 7096 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d
9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.014933 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.017450 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.017474 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.017482 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.017494 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.017503 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.031057 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.048241 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] 
Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.062868 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.081097 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.101232 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f
4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.117718 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.121322 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.121388 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.121407 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.121764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.121806 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.134711 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:54Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:54 crc 
kubenswrapper[4980]: I0107 03:33:54.225108 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.225163 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.225183 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.225209 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.225228 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.324104 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/3.log" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.327586 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.327638 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.327656 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.327680 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.327698 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.431754 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.431831 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.431858 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.431893 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.431923 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.535511 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.535652 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.535678 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.535710 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.535733 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.639034 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.639090 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.639105 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.639127 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.639142 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.735531 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:54 crc kubenswrapper[4980]: E0107 03:33:54.735745 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.735862 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.735870 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:54 crc kubenswrapper[4980]: E0107 03:33:54.736094 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:54 crc kubenswrapper[4980]: E0107 03:33:54.736176 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.742240 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.742334 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.742351 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.742380 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.742398 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.845018 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.845111 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.845132 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.845190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.845208 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.948241 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.948287 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.948303 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.948325 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:54 crc kubenswrapper[4980]: I0107 03:33:54.948341 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:54Z","lastTransitionTime":"2026-01-07T03:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.102209 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.102253 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.102264 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.102281 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.102292 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.205414 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.205467 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.205483 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.205504 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.205522 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.308473 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.308526 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.308542 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.308587 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.308604 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.410721 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.410793 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.410815 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.410840 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.410859 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.513919 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.513982 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.514007 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.514040 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.514061 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.616612 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.616664 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.616680 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.616700 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.616718 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.719205 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.719262 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.719279 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.719303 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.719325 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.735352 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:55 crc kubenswrapper[4980]: E0107 03:33:55.735530 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.822283 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.822364 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.822391 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.822421 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.822439 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.925622 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.925684 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.925703 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.925725 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:55 crc kubenswrapper[4980]: I0107 03:33:55.925741 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:55Z","lastTransitionTime":"2026-01-07T03:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.028683 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.028741 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.028759 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.028786 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.028804 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.131455 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.131512 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.131533 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.131590 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.131612 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.233879 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.233922 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.233935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.233952 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.233965 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.341644 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.341702 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.341721 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.341743 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.341760 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.444604 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.444639 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.444647 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.444659 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.444669 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.552360 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.552402 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.552415 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.552434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.552819 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.557164 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557298 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-07 03:35:00.557278963 +0000 UTC m=+147.122973708 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.557375 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.557407 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.557430 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557531 4980 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557592 4980 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557642 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.557630423 +0000 UTC m=+147.123325158 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557657 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.557649793 +0000 UTC m=+147.123344528 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557678 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557723 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557749 4980 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.557841 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.557815168 +0000 UTC m=+147.123509953 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.655713 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.655770 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.655792 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.655821 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.655844 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.658298 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.658544 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.658617 4980 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.658636 4980 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.658713 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.658691143 +0000 UTC m=+147.224385918 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.735534 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.735711 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.735544 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.735815 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.735996 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:56 crc kubenswrapper[4980]: E0107 03:33:56.736165 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.759090 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.759154 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.759181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.759211 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.759239 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.862464 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.862574 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.862593 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.862625 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.862644 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.967302 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.967340 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.967348 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.967386 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:56 crc kubenswrapper[4980]: I0107 03:33:56.967396 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:56Z","lastTransitionTime":"2026-01-07T03:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.070297 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.070340 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.070350 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.070366 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.070378 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.174043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.174091 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.174147 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.174174 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.174510 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.278260 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.278639 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.278658 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.278680 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.278696 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.382085 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.382185 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.382203 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.382229 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.382247 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.485354 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.485444 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.485470 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.485499 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.485521 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.588768 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.588836 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.588855 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.588879 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.588897 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.692073 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.692134 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.692151 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.692183 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.692234 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.735276 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:57 crc kubenswrapper[4980]: E0107 03:33:57.735466 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.795989 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.796043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.796060 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.796081 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.796099 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.898975 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.899029 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.899046 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.899066 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:57 crc kubenswrapper[4980]: I0107 03:33:57.899082 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:57Z","lastTransitionTime":"2026-01-07T03:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.001972 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.002022 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.002039 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.002062 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.002078 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.104905 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.104970 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.104988 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.105011 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.105029 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.207642 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.207703 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.207723 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.207746 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.207763 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.310973 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.311049 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.311077 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.311109 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.311134 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.414223 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.414284 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.414306 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.414332 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.414350 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.517598 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.517645 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.517654 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.517674 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.517685 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.620631 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.620707 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.620752 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.620777 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.620801 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.660356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.660407 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.660426 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.660446 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.660462 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.681869 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.687052 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.687130 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.687147 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.687174 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.687193 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.705064 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.710015 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.710108 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.710128 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.710153 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.710175 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.731116 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.734838 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.735062 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.735116 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.735149 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.735273 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.735405 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.740517 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.740684 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.740704 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.740734 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.740758 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.765361 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.770511 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.770547 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.770583 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.770604 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.770618 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.789650 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:33:58Z is after 2025-08-24T17:21:41Z" Jan 07 03:33:58 crc kubenswrapper[4980]: E0107 03:33:58.789810 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.792231 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.792292 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.792311 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.792337 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.792356 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.895763 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.895816 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.895835 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.895862 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.895881 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.998692 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.998767 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.998785 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.998813 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:58 crc kubenswrapper[4980]: I0107 03:33:58.998834 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:58Z","lastTransitionTime":"2026-01-07T03:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.102614 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.102679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.102697 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.102721 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.102740 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.205301 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.205395 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.205415 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.205443 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.205463 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.308364 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.308443 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.308464 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.308492 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.308511 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.411838 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.411898 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.411915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.411943 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.411961 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.514731 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.514793 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.514819 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.514845 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.514865 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.617473 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.617525 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.617549 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.617618 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.617641 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.720617 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.720677 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.720705 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.720733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.720754 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.735468 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:33:59 crc kubenswrapper[4980]: E0107 03:33:59.735703 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.824498 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.824603 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.824624 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.824651 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.824670 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.926896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.926962 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.926985 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.927012 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:33:59 crc kubenswrapper[4980]: I0107 03:33:59.927030 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:33:59Z","lastTransitionTime":"2026-01-07T03:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.029942 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.030000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.030019 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.030043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.030060 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.132724 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.132789 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.132806 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.132831 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.132851 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.235195 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.235251 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.235262 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.235279 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.235290 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.338224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.338297 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.338319 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.338347 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.338365 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.441905 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.442001 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.442021 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.442051 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.442073 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.545143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.545203 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.545224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.545253 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.545276 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.648086 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.648189 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.648214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.648246 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.648271 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.734937 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.735114 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:00 crc kubenswrapper[4980]: E0107 03:34:00.735351 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.735381 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:00 crc kubenswrapper[4980]: E0107 03:34:00.735533 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:00 crc kubenswrapper[4980]: E0107 03:34:00.735769 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.751154 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.751205 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.751227 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.751254 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.751278 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.854258 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.854320 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.854337 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.854363 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.854381 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.957091 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.958170 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.958231 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.958268 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:00 crc kubenswrapper[4980]: I0107 03:34:00.958317 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:00Z","lastTransitionTime":"2026-01-07T03:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.062284 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.062360 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.062378 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.062405 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.062424 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.165970 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.166042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.166061 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.166088 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.166107 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.271449 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.271522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.271619 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.271695 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.271721 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.374605 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.374694 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.374711 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.374735 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.374753 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.478190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.478267 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.478290 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.478328 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.478384 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.582365 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.582501 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.582522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.582547 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.582597 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.686384 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.686451 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.686512 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.686543 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.686566 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.735905 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:01 crc kubenswrapper[4980]: E0107 03:34:01.736145 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.789936 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.789992 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.790002 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.790022 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.790035 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.892881 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.892943 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.892955 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.892974 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.892984 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.995639 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.995709 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.995727 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.995755 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:01 crc kubenswrapper[4980]: I0107 03:34:01.995777 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:01Z","lastTransitionTime":"2026-01-07T03:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.099113 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.099160 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.099175 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.099193 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.099205 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.203132 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.203181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.203190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.203206 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.203216 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.306985 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.307067 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.307086 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.307113 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.307134 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.410839 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.410924 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.410947 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.410974 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.410994 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.515181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.515273 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.515296 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.515326 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.515346 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.619073 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.619154 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.619173 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.619200 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.619218 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.722005 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.722085 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.722116 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.722150 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.722173 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.735206 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.735315 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.735402 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:02 crc kubenswrapper[4980]: E0107 03:34:02.735631 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:02 crc kubenswrapper[4980]: E0107 03:34:02.735813 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:02 crc kubenswrapper[4980]: E0107 03:34:02.736024 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.825299 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.825367 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.825387 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.825413 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.825432 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.928485 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.928541 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.928595 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.928622 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:02 crc kubenswrapper[4980]: I0107 03:34:02.928638 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:02Z","lastTransitionTime":"2026-01-07T03:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.031693 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.031762 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.031780 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.031808 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.031825 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.134606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.134677 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.134698 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.134722 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.134740 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.237332 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.237390 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.237406 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.237429 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.237448 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.340275 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.340333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.340349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.340374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.340391 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.442535 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.442619 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.442631 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.442651 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.442661 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.544978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.545039 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.545056 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.545081 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.545099 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.647908 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.647961 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.647975 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.647994 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.648009 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.734652 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:03 crc kubenswrapper[4980]: E0107 03:34:03.734786 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.749757 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.749819 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.749835 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.749860 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.749878 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.757781 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.777929 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.798188 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.814710 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.831741 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.851045 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.852733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.852783 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 
03:34:03.852801 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.852825 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.852843 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.912321 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.931603 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.949964 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.955038 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.955087 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.955103 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.955126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.955147 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:03Z","lastTransitionTime":"2026-01-07T03:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.981871 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://387efea1fc5ee6cbd65926094045fcbb6059440a44173f181f6b0e1d3213fb75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:24Z\\\",\\\"message\\\":\\\"Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0107 03:33:24.880795 6701 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:52Z\\\",\\\"message\\\":\\\"46444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-nv5s5 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-9ct5r openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb 
openshift-etcd/etcd-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-machine-config-operator/machine-config-daemon-hzlt6 openshift-multus/multus-additional-cni-plugins-rwpf2 openshift-multus/network-metrics-daemon-j75z7 openshift-network-diagnostics/network-check-target-xd92c openshift-image-registry/node-ca-9pk7v]\\\\nI0107 03:33:52.337498 7096 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0107 03:33:52.337521 7096 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-9pk7v\\\\nF0107 03:33:52.337532 7096 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d
9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:03 crc kubenswrapper[4980]: I0107 03:34:03.998195 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:03Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.014623 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"160280d9-c513-4102-9613-b3164459b751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523d0471ddbf76f60815470ee971e8b259c45255a4e7fd7844ca9b258b4bcbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.034365 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] 
Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.050885 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.057438 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.057494 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.057514 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc 
kubenswrapper[4980]: I0107 03:34:04.057538 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.057596 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.071454 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 
03:34:04.094303 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59
298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.111606 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.129388 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.151110 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:04Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.159924 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.159977 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.159994 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.160020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.160037 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.263223 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.263279 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.263298 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.263320 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.263338 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.366344 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.366410 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.366429 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.366453 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.366471 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.469120 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.469178 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.469196 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.469221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.469239 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.572235 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.572315 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.572338 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.572365 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.572384 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.675192 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.675271 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.675295 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.675325 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.675347 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.735024 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.735098 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.735029 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:04 crc kubenswrapper[4980]: E0107 03:34:04.735196 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:04 crc kubenswrapper[4980]: E0107 03:34:04.735284 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:04 crc kubenswrapper[4980]: E0107 03:34:04.735415 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.778633 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.778710 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.778731 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.778764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.778787 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.882147 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.882207 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.882224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.882249 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.882266 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.984946 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.985003 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.985022 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.985047 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:04 crc kubenswrapper[4980]: I0107 03:34:04.985064 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:04Z","lastTransitionTime":"2026-01-07T03:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.087662 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.087727 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.087739 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.087761 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.087777 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.190787 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.190861 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.190888 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.190928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.190951 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.293974 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.294047 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.294067 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.294092 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.294110 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.397495 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.397607 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.397632 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.397662 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.397684 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.501227 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.501300 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.501325 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.501357 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.501379 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.604641 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.604773 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.604790 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.604814 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.604834 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.708092 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.708160 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.708180 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.708205 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.708223 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.734832 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:05 crc kubenswrapper[4980]: E0107 03:34:05.735267 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.811169 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.811271 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.811297 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.811328 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.811348 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.914256 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.914330 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.914358 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.914390 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:05 crc kubenswrapper[4980]: I0107 03:34:05.914414 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:05Z","lastTransitionTime":"2026-01-07T03:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.017333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.017390 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.017409 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.017434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.017452 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.119804 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.119874 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.119894 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.119921 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.119938 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.223348 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.223482 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.223513 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.223640 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.223667 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.327376 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.327431 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.327446 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.327470 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.327485 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.430593 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.430666 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.430686 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.430713 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.430734 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.534189 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.534249 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.534270 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.534300 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.534320 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.638015 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.638077 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.638094 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.638118 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.638136 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.735610 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.735633 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.736032 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:06 crc kubenswrapper[4980]: E0107 03:34:06.736328 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:06 crc kubenswrapper[4980]: E0107 03:34:06.736445 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.736627 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:34:06 crc kubenswrapper[4980]: E0107 03:34:06.736682 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:06 crc kubenswrapper[4980]: E0107 03:34:06.736942 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.741511 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.741598 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.741624 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.741652 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.741674 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.754120 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.775354 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.798528 4980 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.817361 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.835824 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.844692 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.844748 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.844764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 
03:34:06.844790 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.844807 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.856910 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.873618 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc 
kubenswrapper[4980]: I0107 03:34:06.889302 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"160280d9-c513-4102-9613-b3164459b751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523d0471ddbf76f60815470ee971e8b259c45255a4e7fd7844ca9b258b4bcbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.911781 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.928957 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.946653 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.948151 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.948204 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.948221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.948245 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.948262 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:06Z","lastTransitionTime":"2026-01-07T03:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.975856 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:52Z\\\",\\\"message\\\":\\\"46444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-nv5s5 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-9ct5r openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-etcd/etcd-crc 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-machine-config-operator/machine-config-daemon-hzlt6 openshift-multus/multus-additional-cni-plugins-rwpf2 openshift-multus/network-metrics-daemon-j75z7 openshift-network-diagnostics/network-check-target-xd92c openshift-image-registry/node-ca-9pk7v]\\\\nI0107 03:33:52.337498 7096 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0107 03:33:52.337521 7096 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-9pk7v\\\\nF0107 03:33:52.337532 7096 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:06 crc kubenswrapper[4980]: I0107 03:34:06.995109 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:06Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.014804 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] 
Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.027626 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.042669 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed
3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.051181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.051242 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.051259 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.051284 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.051300 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.062871 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.076978 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.092543 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae
8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:07Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.153987 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.154030 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.154042 4980 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.154057 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.154066 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.257220 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.257276 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.257292 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.257323 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.257340 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.360295 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.360358 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.360374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.360399 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.360420 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.463873 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.463938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.463959 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.463990 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.464012 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.567106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.567164 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.567180 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.567204 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.567224 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.670583 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.670614 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.670622 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.670635 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.670644 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.735054 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:07 crc kubenswrapper[4980]: E0107 03:34:07.735297 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.772870 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.772948 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.772967 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.772991 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.773009 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.876521 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.876621 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.876638 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.876666 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.876685 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.979348 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.979416 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.979434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.979458 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:07 crc kubenswrapper[4980]: I0107 03:34:07.979476 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:07Z","lastTransitionTime":"2026-01-07T03:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.082606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.082669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.082685 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.082710 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.082728 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.185746 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.185819 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.185843 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.185874 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.185896 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.288915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.288986 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.289006 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.289036 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.289058 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.391538 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.391656 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.391676 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.391703 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.391723 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.494485 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.494545 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.494601 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.494628 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.494648 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.597248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.597304 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.597316 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.597332 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.597344 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.701135 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.701202 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.701221 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.701248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.701268 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.735048 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.735061 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.735275 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.735074 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.735387 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.735677 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.796197 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.796260 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.796273 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.796293 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.796303 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.813243 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.817044 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.817109 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.817126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.817153 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.817171 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.834520 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.839100 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.839150 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.839175 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.839205 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.839230 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.857930 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.862700 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.862762 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.862782 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.862806 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.862823 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.880617 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.885481 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.885586 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.885616 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.885649 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.885670 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.904789 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:08Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:08 crc kubenswrapper[4980]: E0107 03:34:08.905057 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.907075 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.907143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.907161 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.907186 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:08 crc kubenswrapper[4980]: I0107 03:34:08.907204 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:08Z","lastTransitionTime":"2026-01-07T03:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.009525 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.009627 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.009654 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.009684 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.009706 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.112728 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.112779 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.112798 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.112822 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.112839 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.214900 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.214943 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.214957 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.214974 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.214991 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.318434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.318516 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.318540 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.318608 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.318635 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.422343 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.422405 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.422432 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.422462 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.422485 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.525863 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.525930 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.525954 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.525985 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.526008 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.628918 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.628977 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.628995 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.629022 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.629046 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.732527 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.732622 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.732644 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.732669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.732688 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.734827 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:09 crc kubenswrapper[4980]: E0107 03:34:09.735377 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.836216 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.836293 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.836316 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.836397 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.836903 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.940765 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.940848 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.940873 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.940903 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:09 crc kubenswrapper[4980]: I0107 03:34:09.940927 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:09Z","lastTransitionTime":"2026-01-07T03:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.043657 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.043714 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.043737 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.043765 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.043786 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.146286 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.146351 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.146369 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.146395 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.146414 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.249910 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.249967 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.249984 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.250007 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.250024 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.352539 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.352639 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.352655 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.352680 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.352698 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.455448 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.455502 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.455520 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.455543 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.455596 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.558515 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.558609 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.558628 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.558652 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.558668 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.661926 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.662045 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.662062 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.662085 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.662103 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.735084 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.735164 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:10 crc kubenswrapper[4980]: E0107 03:34:10.735272 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.735372 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:10 crc kubenswrapper[4980]: E0107 03:34:10.735624 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:10 crc kubenswrapper[4980]: E0107 03:34:10.735998 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.765439 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.765507 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.765527 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.765586 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.765608 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.869165 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.869266 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.869320 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.869349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.869373 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.972672 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.972764 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.972781 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.972807 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:10 crc kubenswrapper[4980]: I0107 03:34:10.972825 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:10Z","lastTransitionTime":"2026-01-07T03:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.076685 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.076750 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.076769 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.076794 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.076813 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.180525 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.180613 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.180632 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.180656 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.180674 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.283453 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.283502 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.283518 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.283539 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.283580 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.386606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.386661 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.386672 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.386689 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.386702 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.489292 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.489356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.489379 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.489410 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.489437 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.591878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.591938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.591957 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.592011 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.592028 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.695336 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.695394 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.695410 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.695437 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.695454 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.735734 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:11 crc kubenswrapper[4980]: E0107 03:34:11.735982 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.797938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.798177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.798194 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.798218 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.798235 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.901793 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.901903 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.901923 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.901956 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:11 crc kubenswrapper[4980]: I0107 03:34:11.901975 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:11Z","lastTransitionTime":"2026-01-07T03:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.004802 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.004862 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.004880 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.004904 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.004924 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.108282 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.108334 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.108344 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.108360 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.108371 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.136340 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:12 crc kubenswrapper[4980]: E0107 03:34:12.136504 4980 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:34:12 crc kubenswrapper[4980]: E0107 03:34:12.136594 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs podName:1e3c7945-f3cb-4af2-8a0f-19b014123f74 nodeName:}" failed. No retries permitted until 2026-01-07 03:35:16.136547867 +0000 UTC m=+162.702242602 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs") pod "network-metrics-daemon-j75z7" (UID: "1e3c7945-f3cb-4af2-8a0f-19b014123f74") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.211149 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.211215 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.211236 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.211261 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.211279 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.314509 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.314593 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.314611 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.314636 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.314654 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.417817 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.417869 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.417886 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.417906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.417920 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.520849 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.520907 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.520926 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.520953 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.520974 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.624457 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.624510 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.624520 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.624812 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.624829 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.727809 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.727867 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.727878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.727900 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.727916 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.735363 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.735360 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.735494 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:12 crc kubenswrapper[4980]: E0107 03:34:12.735619 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:12 crc kubenswrapper[4980]: E0107 03:34:12.735817 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:12 crc kubenswrapper[4980]: E0107 03:34:12.736251 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.830846 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.830907 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.830920 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.830944 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.830956 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.933915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.933981 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.934000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.934024 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:12 crc kubenswrapper[4980]: I0107 03:34:12.934045 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:12Z","lastTransitionTime":"2026-01-07T03:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.037385 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.037434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.037444 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.037458 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.037469 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.141040 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.141105 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.141123 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.141153 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.141169 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.244298 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.244336 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.244345 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.244360 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.244368 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.347368 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.347456 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.347482 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.347510 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.347529 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.450371 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.450436 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.450453 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.450477 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.450494 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.553318 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.553391 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.553409 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.553434 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.553453 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.656471 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.656543 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.656596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.656628 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.656646 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.735588 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:13 crc kubenswrapper[4980]: E0107 03:34:13.735804 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.760064 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.760224 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.760256 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.760344 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.760365 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.768696 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c962a95-c8ed-4d65-810e-1da967416c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:52Z\\\",\\\"message\\\":\\\"46444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-nv5s5 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-9ct5r openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-etcd/etcd-crc 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-machine-config-operator/machine-config-daemon-hzlt6 openshift-multus/multus-additional-cni-plugins-rwpf2 openshift-multus/network-metrics-daemon-j75z7 openshift-network-diagnostics/network-check-target-xd92c openshift-image-registry/node-ca-9pk7v]\\\\nI0107 03:33:52.337498 7096 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0107 03:33:52.337521 7096 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-9pk7v\\\\nF0107 03:33:52.337532 7096 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c5a836654797bf5c
57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzxmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5n7sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.784775 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j75z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e3c7945-f3cb-4af2-8a0f-19b014123f74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtlz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j75z7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.799498 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"160280d9-c513-4102-9613-b3164459b751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523d0471ddbf76f60815470ee971e8b259c45255a4e7fd7844ca9b258b4bcbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5aa770dddd3c34cae9a89f2fbbaf2180eb5806efb970bcb176c9948126c5934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.830141 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51fa7292-bb80-4749-a0f8-6a8a46784604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197a59b85ba77699389df9ef8bd799466176216fd816459004b6b22d727d1fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7850b054319082196f1683afb11e9d4e402fbe2c62ae14180536e65dc40560a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00f68e216aa47d58b25de11db2a0e2910798ea049dfd91f8a54c55c72ea9fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47faa85f8f52a30d9cb2d02ab8c93c4b2bdd07a12de842e74c2d92a4e50edfa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1aff1284e1fa7b5c07bd3449a19746401a153eeba92f959115b3ca70718e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29c7bdee481cae64c82ebeeb1c542d18fef41071dbd1d6e080a8bbe35038933\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db47bd8619f34c81035fd8607acb8e61594795a1b3bd43685d814499c3ade4da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a086acbd5fda241b6fd65afa83e1391dddd8bbde3ecfc07d33a2ecdd3f4c955c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.850998 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a258cfd1-39e0-429e-9e0a-9b194eb37cc4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2a6fd6463030fc325a1d02660ee60568d07812763b17f0afff926c273155620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9146f4a2fc7846d5688c54a7c7508ef7a9b0982306cf09705764fcc74e3f9597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6a45bda4fadd983cfdd44b4bd0d30119c61a359a83e8380e97ae53ef2b4f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df71f4b3b7a02831bdf162d694df267710d604037d39a461438bb54b515f977\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.862638 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.862679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 
03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.862693 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.862712 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.862726 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.870123 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bd916c060570bbb83e3836b5fbd0526634c3abf2898cf4327730cc233976161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.887952 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29f7b23c1eebd32257b6bca5397dfef632f463a9d40e463fe0084550d34b85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195699f051067da
55bb75e7c0002395f5f0076119d45647faf30808edffd0260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.909355 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9ct5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3e552e-9608-4577-86c3-5f7573ef22f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-07T03:33:40Z\\\",\\\"message\\\":\\\"2026-01-07T03:32:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d\\\\n2026-01-07T03:32:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a7eec747-5882-4140-a693-ee9f6ac1a16d to /host/opt/cni/bin/\\\\n2026-01-07T03:32:55Z [verbose] multus-daemon started\\\\n2026-01-07T03:32:55Z [verbose] 
Readiness Indicator file check\\\\n2026-01-07T03:33:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9htm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9ct5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.926614 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dab971c4391b9eb2e49adc41e088bdc2253c4d26704a7b30bbb4783e747a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6
b2ab0109be441dbfea33c66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzlt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.965899 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.965964 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.965985 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:13 crc 
kubenswrapper[4980]: I0107 03:34:13.966022 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.966043 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:13Z","lastTransitionTime":"2026-01-07T03:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:13 crc kubenswrapper[4980]: I0107 03:34:13.983584 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8350df-0578-466d-a505-54f93e6365e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03
:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0107 03:32:46.306541 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0107 03:32:46.308284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3947479587/tls.crt::/tmp/serving-cert-3947479587/tls.key\\\\\\\"\\\\nI0107 03:32:52.045643 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 03:32:52.052461 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 03:32:52.052490 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 03:32:52.052525 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 03:32:52.052536 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 
03:32:52.083542 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0107 03:32:52.083647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 03:32:52.083669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083696 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 03:32:52.083721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 03:32:52.083725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 03:32:52.083729 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 03:32:52.083734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 03:32:52.086535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:13Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.007649 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03d0f597-1e90-409f-8345-b641cb7342ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02297c82183907349322166711c31005bd572679ecff005a446d8c60974d50f7\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8f4b61685911e54b93681fc6fa7474e82e38f3a7815e67319ca4b83a1d6bd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99dd226bc468e4cc9a8f663580c09e07919dd84733a2a85e67ed8a59298fcd67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878485e3fa65d092d0d4961bac125b724b3ba002da625c438ee31c89dac6f41d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7514f4e2b7f1198517e7f2ecc29a754d710e17d9f6c25eacf0ea3e62e7af18d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed37d42a95aec6f53b04ee0c01b3002cb35d1694c72b7732151c0d94f55443a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee9a57132819f94b332e7eb9da02747b570dde94408ecd0314c756daa1d76d6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T03:32:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T03:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwpf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.024390 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9pk7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec078f4e-8312-4e23-a374-8da01dfc253a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9d28ff3f87f041408f1236fb0667ad90c522808d5708e42e32455f4f1ecc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdx9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9pk7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.041613 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f887efc-b87f-4d3c-a077-f0c083487518\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:33:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6043a6cbad02b83c5fb492855300c03d5316c8a796916a632b3bdb2bd8029481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4f05957ddc2935b00ff568ffe2f9da16ae8ef95092062745b066d22568fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:33:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfrdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:33:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tqvct\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.061054 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.069769 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.069835 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.069859 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.069889 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.069912 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.079630 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nv5s5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e314feea-3256-447b-8f15-50ffcefd4d38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6914ed5d22fb1f6718354350dc3ee5435e4637cc3921850dbdb629fb01b9ecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4mz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nv5s5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.101782 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09c5b95e-7094-4e2c-adeb-cbf27b70063b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b349fcc45e1970dfba12096b77f2716efa52505d8042d5fabd7375ccd376ba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c559705bd24f2ab87e2f4828012c2b3e1e0843c3dd2b7369deb813c17e5499bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6783d6520527d8393ea5a71db61edf7dab40a9d3c5f61162fd079291e0c3b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-07T03:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T03:32:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.123818 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7cc8c5c37d2ab5fb457adf24c9051fb7c3dd58ea545d477432aa2ddf8a26411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T03:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.142740 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.162888 4980 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T03:32:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:14Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.172612 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.172667 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.172686 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 
03:34:14.172712 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.172729 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.275481 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.275536 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.275586 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.275606 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.275622 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.381523 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.381629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.381649 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.381675 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.381700 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.484512 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.484931 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.485133 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.485379 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.485624 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.589037 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.589107 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.589126 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.589152 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.589169 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.691632 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.691689 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.691707 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.691729 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.691745 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.734715 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.734750 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:14 crc kubenswrapper[4980]: E0107 03:34:14.735252 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.734750 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:14 crc kubenswrapper[4980]: E0107 03:34:14.735771 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:14 crc kubenswrapper[4980]: E0107 03:34:14.735439 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.793958 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.794372 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.794507 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.794704 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.794827 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.897631 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.897702 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.897727 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.897759 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:14 crc kubenswrapper[4980]: I0107 03:34:14.897833 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:14Z","lastTransitionTime":"2026-01-07T03:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.000720 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.001143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.001292 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.001435 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.001602 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.104109 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.104160 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.104177 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.104205 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.104228 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.207091 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.207609 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.207786 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.207951 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.208177 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.311361 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.311419 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.311438 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.311466 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.311485 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.414879 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.414934 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.414945 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.414966 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.414977 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.517616 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.517713 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.517733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.517788 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.517807 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.620327 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.620399 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.620418 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.620457 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.620473 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.723996 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.724111 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.724135 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.724228 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.724292 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.734881 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:15 crc kubenswrapper[4980]: E0107 03:34:15.735408 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.827328 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.827407 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.827488 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.827516 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.827536 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.929949 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.930020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.930042 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.930074 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:15 crc kubenswrapper[4980]: I0107 03:34:15.930096 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:15Z","lastTransitionTime":"2026-01-07T03:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.032849 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.032911 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.032930 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.032954 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.032972 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.136935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.137000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.137020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.137045 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.137063 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.240401 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.240476 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.240495 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.240522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.240543 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.343214 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.343297 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.343326 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.343359 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.343382 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.446079 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.446143 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.446161 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.446186 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.446207 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.549706 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.549774 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.549796 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.549828 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.549853 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.652981 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.653049 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.653071 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.653096 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.653117 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.735264 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.735393 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:16 crc kubenswrapper[4980]: E0107 03:34:16.735465 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.735487 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:16 crc kubenswrapper[4980]: E0107 03:34:16.735593 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:16 crc kubenswrapper[4980]: E0107 03:34:16.735720 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.756275 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.756343 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.756363 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.756392 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.756412 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.860345 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.860408 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.860426 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.860451 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.860471 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.963795 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.963904 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.963922 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.963940 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:16 crc kubenswrapper[4980]: I0107 03:34:16.963953 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:16Z","lastTransitionTime":"2026-01-07T03:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.066928 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.067015 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.067033 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.067054 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.067071 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.170546 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.170651 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.170664 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.170683 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.170696 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.273407 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.273513 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.273531 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.273580 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.273602 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.376740 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.376812 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.376825 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.376840 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.376852 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.480936 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.480995 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.481013 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.481035 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.481053 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.584147 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.584207 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.584225 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.584250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.584270 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.687399 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.687462 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.687484 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.687510 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.687528 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.734898 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:17 crc kubenswrapper[4980]: E0107 03:34:17.735482 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.735930 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:34:17 crc kubenswrapper[4980]: E0107 03:34:17.736162 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.790435 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.790497 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.790509 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.790527 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.790539 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.893292 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.893353 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.893370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.893393 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.893410 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.995733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.995793 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.995812 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.995836 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:17 crc kubenswrapper[4980]: I0107 03:34:17.995853 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:17Z","lastTransitionTime":"2026-01-07T03:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.099043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.099109 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.099128 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.099161 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.099181 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.202816 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.202888 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.202907 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.202938 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.202955 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.306396 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.306479 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.306497 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.306522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.306542 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.410969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.411058 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.411076 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.411105 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.411153 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.514341 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.514382 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.514392 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.514409 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.514420 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.617814 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.617876 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.617894 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.617919 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.617987 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.721020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.721081 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.721101 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.721125 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.721142 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.734688 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.734796 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:18 crc kubenswrapper[4980]: E0107 03:34:18.734860 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.734717 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:18 crc kubenswrapper[4980]: E0107 03:34:18.735136 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:18 crc kubenswrapper[4980]: E0107 03:34:18.735012 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.824162 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.824220 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.824238 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.824262 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.824279 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.927357 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.927472 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.927494 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.927517 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:18 crc kubenswrapper[4980]: I0107 03:34:18.927534 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:18Z","lastTransitionTime":"2026-01-07T03:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.030238 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.030294 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.030312 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.030337 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.030354 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.074106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.074181 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.074200 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.074222 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.074239 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: E0107 03:34:19.089455 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:19Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.093196 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.093244 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.093261 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.093320 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.093340 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: E0107 03:34:19.115511 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:19Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.120314 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.120385 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.120411 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.120441 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.120463 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: E0107 03:34:19.140215 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:19Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.145003 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.145043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.145064 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.145089 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.145109 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: E0107 03:34:19.163766 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:19Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.168621 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.168662 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.168681 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.168702 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.168718 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: E0107 03:34:19.188823 4980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T03:34:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"faa7e186-0b6e-43ad-a16a-d507c499b170\\\",\\\"systemUUID\\\":\\\"9c9b768a-7681-4a73-9b43-d778a3c82c46\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-07T03:34:19Z is after 2025-08-24T17:21:41Z" Jan 07 03:34:19 crc kubenswrapper[4980]: E0107 03:34:19.189096 4980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.191190 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.191246 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.191263 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.191290 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.191309 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.293925 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.293990 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.294012 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.294044 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.294066 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.397194 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.397250 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.397264 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.397285 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.397299 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.500004 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.500057 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.500074 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.500096 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.500112 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.603241 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.603297 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.603313 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.603336 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.603351 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.706074 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.706131 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.706146 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.706166 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.706185 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.734902 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:19 crc kubenswrapper[4980]: E0107 03:34:19.735102 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.809251 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.809310 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.809330 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.809353 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.809370 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.912163 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.912220 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.912236 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.912260 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:19 crc kubenswrapper[4980]: I0107 03:34:19.912276 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:19Z","lastTransitionTime":"2026-01-07T03:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.015304 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.015367 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.015384 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.015413 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.015433 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.119037 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.119100 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.119124 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.119157 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.119176 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.222596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.222656 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.222674 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.222694 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.222712 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.326550 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.326680 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.326709 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.326739 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.326763 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.430438 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.430507 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.430529 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.430600 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.430627 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.533323 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.533386 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.533408 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.533437 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.533458 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.636869 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.636978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.636996 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.637019 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.637034 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.734744 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.734813 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.734924 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:20 crc kubenswrapper[4980]: E0107 03:34:20.735127 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:20 crc kubenswrapper[4980]: E0107 03:34:20.735507 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:20 crc kubenswrapper[4980]: E0107 03:34:20.735660 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.739865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.739918 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.739935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.739956 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.739972 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.843798 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.843845 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.843862 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.843884 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.843901 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.946897 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.947015 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.947043 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.947073 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:20 crc kubenswrapper[4980]: I0107 03:34:20.947097 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:20Z","lastTransitionTime":"2026-01-07T03:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.050014 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.050079 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.050099 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.050124 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.050140 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.153851 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.153915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.153934 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.153963 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.153981 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.258582 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.258666 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.258689 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.258722 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.258761 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.360818 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.360865 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.360878 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.360894 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.360906 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.463391 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.463424 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.463435 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.463470 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.463483 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.566337 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.566392 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.566409 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.566430 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.566446 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.669607 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.669676 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.669693 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.669718 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.669737 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.735445 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:21 crc kubenswrapper[4980]: E0107 03:34:21.735681 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.772968 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.773025 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.773044 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.773072 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.773091 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.876423 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.876487 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.876506 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.876530 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.876549 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.979925 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.979981 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.979998 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.980020 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:21 crc kubenswrapper[4980]: I0107 03:34:21.980039 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:21Z","lastTransitionTime":"2026-01-07T03:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.083333 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.083408 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.083427 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.083453 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.083475 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.186711 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.186888 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.186915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.186945 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.186970 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.289603 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.289676 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.289702 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.289731 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.289751 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.392755 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.392831 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.392849 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.392876 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.392894 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.496248 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.496310 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.496327 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.496349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.496367 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.598679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.598736 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.598756 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.598780 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.598797 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.701738 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.701799 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.701817 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.701839 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.701866 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.735206 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.735219 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.735385 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:22 crc kubenswrapper[4980]: E0107 03:34:22.735550 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:22 crc kubenswrapper[4980]: E0107 03:34:22.735828 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:22 crc kubenswrapper[4980]: E0107 03:34:22.735987 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.805403 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.805535 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.805611 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.805650 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.805675 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.908070 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.908116 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.908132 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.908155 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:22 crc kubenswrapper[4980]: I0107 03:34:22.908170 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:22Z","lastTransitionTime":"2026-01-07T03:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.011757 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.011836 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.011858 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.011887 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.011907 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.114921 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.114986 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.115037 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.115065 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.115080 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.218874 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.218940 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.218960 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.218983 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.219000 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.322733 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.322798 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.322816 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.322841 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.322860 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.426322 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.426382 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.426435 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.426469 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.426487 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.529535 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.529646 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.529669 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.529698 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.529717 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.632445 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.632511 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.632530 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.632605 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.632629 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.734713 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:23 crc kubenswrapper[4980]: E0107 03:34:23.734902 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.736873 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.736939 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.736951 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.736992 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.737008 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.797627 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.797604674 podStartE2EDuration="1m31.797604674s" podCreationTimestamp="2026-01-07 03:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:23.776702645 +0000 UTC m=+110.342397420" watchObservedRunningTime="2026-01-07 03:34:23.797604674 +0000 UTC m=+110.363299449" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.840979 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.841383 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.841535 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.841720 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.841855 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.892936 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nv5s5" podStartSLOduration=90.892908449 podStartE2EDuration="1m30.892908449s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:23.890592828 +0000 UTC m=+110.456287573" watchObservedRunningTime="2026-01-07 03:34:23.892908449 +0000 UTC m=+110.458603224" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.945372 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.945451 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.945470 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.945499 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.945520 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:23Z","lastTransitionTime":"2026-01-07T03:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.949472 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=36.949451338 podStartE2EDuration="36.949451338s" podCreationTimestamp="2026-01-07 03:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:23.907237297 +0000 UTC m=+110.472932072" watchObservedRunningTime="2026-01-07 03:34:23.949451338 +0000 UTC m=+110.515146103" Jan 07 03:34:23 crc kubenswrapper[4980]: I0107 03:34:23.973156 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=89.973122702 podStartE2EDuration="1m29.973122702s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:23.949727717 +0000 UTC m=+110.515422492" watchObservedRunningTime="2026-01-07 03:34:23.973122702 +0000 UTC m=+110.538817467" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.030791 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.030764265 podStartE2EDuration="58.030764265s" podCreationTimestamp="2026-01-07 03:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:23.973398111 +0000 UTC m=+110.539092916" watchObservedRunningTime="2026-01-07 03:34:24.030764265 +0000 UTC m=+110.596459060" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.049446 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 
03:34:24.049504 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.049522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.049548 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.049616 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.136545 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9ct5r" podStartSLOduration=91.136517659 podStartE2EDuration="1m31.136517659s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:24.136329803 +0000 UTC m=+110.702024578" watchObservedRunningTime="2026-01-07 03:34:24.136517659 +0000 UTC m=+110.702212434" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.152424 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.152480 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.152498 4980 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.152522 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.152539 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.155657 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podStartSLOduration=91.155637654 podStartE2EDuration="1m31.155637654s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:24.155282903 +0000 UTC m=+110.720977668" watchObservedRunningTime="2026-01-07 03:34:24.155637654 +0000 UTC m=+110.721332419" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.190834 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.190786579 podStartE2EDuration="1m32.190786579s" podCreationTimestamp="2026-01-07 03:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:24.187069765 +0000 UTC m=+110.752764580" watchObservedRunningTime="2026-01-07 03:34:24.190786579 +0000 UTC m=+110.756481344" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.221108 4980 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rwpf2" podStartSLOduration=91.221082295 podStartE2EDuration="1m31.221082295s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:24.219076604 +0000 UTC m=+110.784771379" watchObservedRunningTime="2026-01-07 03:34:24.221082295 +0000 UTC m=+110.786777060" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.238974 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9pk7v" podStartSLOduration=90.238948112 podStartE2EDuration="1m30.238948112s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:24.237652811 +0000 UTC m=+110.803347586" watchObservedRunningTime="2026-01-07 03:34:24.238948112 +0000 UTC m=+110.804642887" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.255847 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.255902 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.255918 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.255940 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.255958 4980 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.258055 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tqvct" podStartSLOduration=90.258029565 podStartE2EDuration="1m30.258029565s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:24.25785716 +0000 UTC m=+110.823551925" watchObservedRunningTime="2026-01-07 03:34:24.258029565 +0000 UTC m=+110.823724330" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.359674 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.359737 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.359755 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.359778 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.359795 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.470622 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.470713 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.470735 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.471638 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.471695 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.574852 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.575424 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.575439 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.575463 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.575481 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.678835 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.678896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.678915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.678939 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.678958 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.734952 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.734995 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:24 crc kubenswrapper[4980]: E0107 03:34:24.735200 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.735230 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:24 crc kubenswrapper[4980]: E0107 03:34:24.735352 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:24 crc kubenswrapper[4980]: E0107 03:34:24.735458 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.781989 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.782065 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.782088 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.782160 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.782183 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.885145 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.885648 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.885847 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.886018 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.886169 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.989747 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.989813 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.989826 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.989856 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:24 crc kubenswrapper[4980]: I0107 03:34:24.989867 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:24Z","lastTransitionTime":"2026-01-07T03:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.092812 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.092900 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.092920 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.092950 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.092971 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.196023 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.196080 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.196097 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.196123 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.196140 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.299395 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.299488 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.299502 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.299528 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.299544 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.403252 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.403342 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.403366 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.403403 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.403424 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.506752 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.506824 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.506842 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.506869 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.506889 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.609957 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.610028 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.610048 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.610076 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.610098 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.712854 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.712921 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.712941 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.712970 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.712995 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.736596 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:25 crc kubenswrapper[4980]: E0107 03:34:25.736793 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.816973 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.817048 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.817067 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.817100 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.817123 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.920358 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.920409 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.920426 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.920451 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:25 crc kubenswrapper[4980]: I0107 03:34:25.920467 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:25Z","lastTransitionTime":"2026-01-07T03:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.023675 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.023726 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.023738 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.023758 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.023773 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.126591 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.126673 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.126694 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.126722 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.126744 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.229856 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.229943 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.229962 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.229990 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.230009 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.332911 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.332971 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.332988 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.333013 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.333029 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.435906 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.435959 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.435978 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.436000 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.436047 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.454642 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/1.log" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.455113 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/0.log" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.455154 4980 generic.go:334] "Generic (PLEG): container finished" podID="3b3e552e-9608-4577-86c3-5f7573ef22f6" containerID="16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643" exitCode=1 Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.455196 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerDied","Data":"16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.455293 4980 scope.go:117] "RemoveContainer" containerID="5165332067f8e209a1e2c4aba3617a12f0b86c24fd008a2f5a411f00d46a5024" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.456177 4980 scope.go:117] "RemoveContainer" containerID="16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643" Jan 07 03:34:26 crc kubenswrapper[4980]: E0107 03:34:26.456441 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9ct5r_openshift-multus(3b3e552e-9608-4577-86c3-5f7573ef22f6)\"" pod="openshift-multus/multus-9ct5r" podUID="3b3e552e-9608-4577-86c3-5f7573ef22f6" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.538913 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc 
kubenswrapper[4980]: I0107 03:34:26.538987 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.539007 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.539034 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.539054 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.642935 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.643004 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.643024 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.643051 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.643070 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.734933 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.735015 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.735055 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:26 crc kubenswrapper[4980]: E0107 03:34:26.735121 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:26 crc kubenswrapper[4980]: E0107 03:34:26.735279 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:26 crc kubenswrapper[4980]: E0107 03:34:26.735519 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.746058 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.746112 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.746129 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.746151 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.746167 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.849948 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.850004 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.850027 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.850050 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.850066 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.952546 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.952629 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.952679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.952694 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:26 crc kubenswrapper[4980]: I0107 03:34:26.952703 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:26Z","lastTransitionTime":"2026-01-07T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.057242 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.057328 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.057355 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.057384 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.057413 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.160545 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.160647 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.160672 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.160881 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.160903 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.263282 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.263349 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.263374 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.263406 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.263431 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.366309 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.366348 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.366356 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.366370 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.366380 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.461416 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/1.log" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.469920 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.469976 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.469993 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.470015 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.470033 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.572853 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.572909 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.572926 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.572947 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.572964 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.675891 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.675931 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.675947 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.675971 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.675987 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.735470 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:27 crc kubenswrapper[4980]: E0107 03:34:27.735672 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.779028 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.779106 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.779127 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.779155 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.779175 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.881433 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.881489 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.881508 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.881539 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.881593 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.984763 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.984810 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.984827 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.984879 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:27 crc kubenswrapper[4980]: I0107 03:34:27.984897 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:27Z","lastTransitionTime":"2026-01-07T03:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.087513 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.087587 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.087605 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.087627 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.087644 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.190089 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.190165 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.190185 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.190215 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.190234 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.292603 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.292676 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.292694 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.292719 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.292738 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.395811 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.395874 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.395896 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.395926 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.395950 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.498853 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.498919 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.498941 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.498969 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.498990 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.602545 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.602636 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.602653 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.602679 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.602698 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.705863 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.705936 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.705954 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.705982 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.706001 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.735470 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.735640 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:28 crc kubenswrapper[4980]: E0107 03:34:28.735802 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.735841 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:28 crc kubenswrapper[4980]: E0107 03:34:28.735994 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:28 crc kubenswrapper[4980]: E0107 03:34:28.736086 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.737282 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:34:28 crc kubenswrapper[4980]: E0107 03:34:28.737577 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5n7sj_openshift-ovn-kubernetes(6c962a95-c8ed-4d65-810e-1da967416c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.808999 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.809048 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.809064 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.809087 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.809106 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.911854 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.911915 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.911937 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.911962 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:28 crc kubenswrapper[4980]: I0107 03:34:28.911985 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:28Z","lastTransitionTime":"2026-01-07T03:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.014491 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.014542 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.014594 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.014616 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.014633 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:29Z","lastTransitionTime":"2026-01-07T03:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.117420 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.117499 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.117520 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.117542 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.117612 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:29Z","lastTransitionTime":"2026-01-07T03:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.220209 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.220269 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.220288 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.220312 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.220331 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:29Z","lastTransitionTime":"2026-01-07T03:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.323149 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.323217 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.323234 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.323258 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.323275 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:29Z","lastTransitionTime":"2026-01-07T03:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.417536 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.417596 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.417607 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.417639 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.417648 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:29Z","lastTransitionTime":"2026-01-07T03:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.439901 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.439945 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.439964 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.439987 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.440004 4980 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T03:34:29Z","lastTransitionTime":"2026-01-07T03:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.480725 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs"] Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.481315 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.485158 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.485472 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.485604 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.486401 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.640392 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4dcb6147-e578-446f-8e84-2d8453e4b23f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.640528 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dcb6147-e578-446f-8e84-2d8453e4b23f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.640632 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4dcb6147-e578-446f-8e84-2d8453e4b23f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.640679 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4dcb6147-e578-446f-8e84-2d8453e4b23f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.640716 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dcb6147-e578-446f-8e84-2d8453e4b23f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.735520 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:29 crc kubenswrapper[4980]: E0107 03:34:29.735766 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.741469 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dcb6147-e578-446f-8e84-2d8453e4b23f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.741627 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4dcb6147-e578-446f-8e84-2d8453e4b23f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.741772 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dcb6147-e578-446f-8e84-2d8453e4b23f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.741703 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4dcb6147-e578-446f-8e84-2d8453e4b23f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.743989 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4dcb6147-e578-446f-8e84-2d8453e4b23f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.744179 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4dcb6147-e578-446f-8e84-2d8453e4b23f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.744243 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dcb6147-e578-446f-8e84-2d8453e4b23f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.744477 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4dcb6147-e578-446f-8e84-2d8453e4b23f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.754538 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dcb6147-e578-446f-8e84-2d8453e4b23f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 
03:34:29.768046 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dcb6147-e578-446f-8e84-2d8453e4b23f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xg6vs\" (UID: \"4dcb6147-e578-446f-8e84-2d8453e4b23f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:29 crc kubenswrapper[4980]: I0107 03:34:29.805797 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" Jan 07 03:34:30 crc kubenswrapper[4980]: I0107 03:34:30.480084 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" event={"ID":"4dcb6147-e578-446f-8e84-2d8453e4b23f","Type":"ContainerStarted","Data":"574d2367b9f6c13be9a15088e78e3e05ce39aa6b5b19378265e1f73f4cf8812b"} Jan 07 03:34:30 crc kubenswrapper[4980]: I0107 03:34:30.480152 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" event={"ID":"4dcb6147-e578-446f-8e84-2d8453e4b23f","Type":"ContainerStarted","Data":"0eae2bf1ba44add5d23136b3096520570297e0ca88d2dd4acc4bb87a7411d51d"} Jan 07 03:34:30 crc kubenswrapper[4980]: I0107 03:34:30.498610 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xg6vs" podStartSLOduration=97.498550598 podStartE2EDuration="1m37.498550598s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:30.497086863 +0000 UTC m=+117.062781628" watchObservedRunningTime="2026-01-07 03:34:30.498550598 +0000 UTC m=+117.064245373" Jan 07 03:34:30 crc kubenswrapper[4980]: I0107 03:34:30.735027 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:30 crc kubenswrapper[4980]: I0107 03:34:30.735086 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:30 crc kubenswrapper[4980]: I0107 03:34:30.735037 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:30 crc kubenswrapper[4980]: E0107 03:34:30.735234 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:30 crc kubenswrapper[4980]: E0107 03:34:30.735381 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:30 crc kubenswrapper[4980]: E0107 03:34:30.735510 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:31 crc kubenswrapper[4980]: I0107 03:34:31.735039 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:31 crc kubenswrapper[4980]: E0107 03:34:31.735379 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:32 crc kubenswrapper[4980]: I0107 03:34:32.735032 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:32 crc kubenswrapper[4980]: I0107 03:34:32.735070 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:32 crc kubenswrapper[4980]: I0107 03:34:32.735136 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:32 crc kubenswrapper[4980]: E0107 03:34:32.735190 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:32 crc kubenswrapper[4980]: E0107 03:34:32.735294 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:32 crc kubenswrapper[4980]: E0107 03:34:32.735494 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:33 crc kubenswrapper[4980]: E0107 03:34:33.675727 4980 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 07 03:34:33 crc kubenswrapper[4980]: I0107 03:34:33.735441 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:33 crc kubenswrapper[4980]: E0107 03:34:33.737701 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:33 crc kubenswrapper[4980]: E0107 03:34:33.837814 4980 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 07 03:34:34 crc kubenswrapper[4980]: I0107 03:34:34.735311 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:34 crc kubenswrapper[4980]: I0107 03:34:34.735354 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:34 crc kubenswrapper[4980]: E0107 03:34:34.735969 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:34 crc kubenswrapper[4980]: I0107 03:34:34.735411 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:34 crc kubenswrapper[4980]: E0107 03:34:34.736068 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:34 crc kubenswrapper[4980]: E0107 03:34:34.735837 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:35 crc kubenswrapper[4980]: I0107 03:34:35.736061 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:35 crc kubenswrapper[4980]: E0107 03:34:35.736237 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:36 crc kubenswrapper[4980]: I0107 03:34:36.734721 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:36 crc kubenswrapper[4980]: I0107 03:34:36.734722 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:36 crc kubenswrapper[4980]: E0107 03:34:36.734885 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:36 crc kubenswrapper[4980]: E0107 03:34:36.734994 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:36 crc kubenswrapper[4980]: I0107 03:34:36.734742 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:36 crc kubenswrapper[4980]: E0107 03:34:36.735091 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:37 crc kubenswrapper[4980]: I0107 03:34:37.735420 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:37 crc kubenswrapper[4980]: E0107 03:34:37.735656 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:38 crc kubenswrapper[4980]: I0107 03:34:38.735319 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:38 crc kubenswrapper[4980]: I0107 03:34:38.735356 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:38 crc kubenswrapper[4980]: E0107 03:34:38.735488 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:38 crc kubenswrapper[4980]: I0107 03:34:38.736308 4980 scope.go:117] "RemoveContainer" containerID="16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643" Jan 07 03:34:38 crc kubenswrapper[4980]: E0107 03:34:38.736630 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:38 crc kubenswrapper[4980]: I0107 03:34:38.736722 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:38 crc kubenswrapper[4980]: E0107 03:34:38.737001 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:38 crc kubenswrapper[4980]: E0107 03:34:38.839735 4980 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 07 03:34:39 crc kubenswrapper[4980]: I0107 03:34:39.513728 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/1.log" Jan 07 03:34:39 crc kubenswrapper[4980]: I0107 03:34:39.513824 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerStarted","Data":"009f39239d8a76a13491308f9e197bfa3b38115c0fa817eb2a9167194b0bb5a3"} Jan 07 03:34:39 crc kubenswrapper[4980]: I0107 03:34:39.734762 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:39 crc kubenswrapper[4980]: E0107 03:34:39.735033 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:40 crc kubenswrapper[4980]: I0107 03:34:40.735308 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:40 crc kubenswrapper[4980]: I0107 03:34:40.735338 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:40 crc kubenswrapper[4980]: E0107 03:34:40.735880 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:40 crc kubenswrapper[4980]: I0107 03:34:40.735394 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:40 crc kubenswrapper[4980]: E0107 03:34:40.736011 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:40 crc kubenswrapper[4980]: E0107 03:34:40.736148 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:41 crc kubenswrapper[4980]: I0107 03:34:41.735588 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:41 crc kubenswrapper[4980]: E0107 03:34:41.735786 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:42 crc kubenswrapper[4980]: I0107 03:34:42.735093 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:42 crc kubenswrapper[4980]: I0107 03:34:42.735183 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:42 crc kubenswrapper[4980]: E0107 03:34:42.735291 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:42 crc kubenswrapper[4980]: I0107 03:34:42.735397 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:42 crc kubenswrapper[4980]: E0107 03:34:42.735593 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:42 crc kubenswrapper[4980]: E0107 03:34:42.735991 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:43 crc kubenswrapper[4980]: I0107 03:34:43.735603 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:43 crc kubenswrapper[4980]: E0107 03:34:43.737840 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:43 crc kubenswrapper[4980]: I0107 03:34:43.738323 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:34:43 crc kubenswrapper[4980]: E0107 03:34:43.840881 4980 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.534178 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/3.log" Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.539814 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerStarted","Data":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.540460 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.576899 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podStartSLOduration=111.576870931 podStartE2EDuration="1m51.576870931s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:44.576753758 +0000 UTC m=+131.142448533" watchObservedRunningTime="2026-01-07 03:34:44.576870931 +0000 UTC m=+131.142565696" Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.669270 4980 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j75z7"] Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.669442 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:44 crc kubenswrapper[4980]: E0107 03:34:44.669611 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.735692 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.735759 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:44 crc kubenswrapper[4980]: I0107 03:34:44.735711 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:44 crc kubenswrapper[4980]: E0107 03:34:44.735873 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:44 crc kubenswrapper[4980]: E0107 03:34:44.736002 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:44 crc kubenswrapper[4980]: E0107 03:34:44.736081 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:46 crc kubenswrapper[4980]: I0107 03:34:46.735355 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:46 crc kubenswrapper[4980]: I0107 03:34:46.735393 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:46 crc kubenswrapper[4980]: I0107 03:34:46.735419 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:46 crc kubenswrapper[4980]: I0107 03:34:46.735363 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:46 crc kubenswrapper[4980]: E0107 03:34:46.735537 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:46 crc kubenswrapper[4980]: E0107 03:34:46.735655 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:46 crc kubenswrapper[4980]: E0107 03:34:46.735777 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:46 crc kubenswrapper[4980]: E0107 03:34:46.735869 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:48 crc kubenswrapper[4980]: I0107 03:34:48.734877 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:48 crc kubenswrapper[4980]: I0107 03:34:48.734940 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:48 crc kubenswrapper[4980]: I0107 03:34:48.734985 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:48 crc kubenswrapper[4980]: E0107 03:34:48.735016 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 07 03:34:48 crc kubenswrapper[4980]: I0107 03:34:48.734876 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:48 crc kubenswrapper[4980]: E0107 03:34:48.735231 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j75z7" podUID="1e3c7945-f3cb-4af2-8a0f-19b014123f74" Jan 07 03:34:48 crc kubenswrapper[4980]: E0107 03:34:48.735340 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 07 03:34:48 crc kubenswrapper[4980]: E0107 03:34:48.735486 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.810932 4980 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.871746 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nrvkf"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.872496 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.873000 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pt6jg"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.873770 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.874228 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sjzgd"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.874901 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.875801 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.876403 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.877258 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.878207 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hn52l"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.878298 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.878742 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hn52l" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.880381 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x9kxk"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.880736 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.885197 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.885436 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.885780 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.886203 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.886444 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.886598 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.887071 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.887905 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-46blp"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.888080 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.888606 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.889294 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.889615 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.889872 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.890100 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.890285 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.891630 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.892216 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.894748 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.896734 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.897423 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.901853 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.902019 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.902046 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.903637 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.903823 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.904026 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.904175 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.908639 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.909412 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mfrps"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.912871 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qrmll"] Jan 07 03:34:49 crc 
kubenswrapper[4980]: I0107 03:34:49.914160 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.914758 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.916720 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.918223 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t7kvf"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.919071 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.919497 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.928965 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.929343 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.942567 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.943173 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.943384 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.944034 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.945822 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtrzt"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.946355 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.947948 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948100 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948221 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948317 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948401 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948497 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948609 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948709 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948811 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.948906 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 
03:34:49.948993 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.949085 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.949213 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.949323 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.949418 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.950014 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.950120 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.950224 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.950332 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.951343 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.951797 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.951835 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.951843 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.951976 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952033 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952079 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952087 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952094 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952260 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952288 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952259 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 07 03:34:49 crc kubenswrapper[4980]: 
I0107 03:34:49.952490 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952576 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952664 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952703 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952494 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952812 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952834 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952709 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952782 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.952926 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.953166 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 
03:34:49.953292 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.954097 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.954615 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.954728 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.954879 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955064 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955242 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.954910 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955083 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955509 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955573 4980 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955699 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955812 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955851 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.955706 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956077 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956204 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956371 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956471 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956616 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956696 4980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956854 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.956982 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.957845 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.958158 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.960624 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nql9v"] Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.963976 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.964040 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.965946 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.966532 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.970026 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 
03:34:49.994014 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.996107 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.996315 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.998056 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.998062 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.998114 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-encryption-config\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:49 crc kubenswrapper[4980]: I0107 03:34:49.998319 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.009300 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: 
I0107 03:34:50.009592 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-config\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.009852 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.010124 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.010350 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.010445 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.010911 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e765fa33-9019-4903-9dc2-5ec87e89c0fe-auth-proxy-config\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011068 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-audit-dir\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc 
kubenswrapper[4980]: I0107 03:34:50.011181 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e765fa33-9019-4903-9dc2-5ec87e89c0fe-machine-approver-tls\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011277 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zbz\" (UniqueName: \"kubernetes.io/projected/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-kube-api-access-r5zbz\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011361 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-service-ca\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011448 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-service-ca\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011542 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011643 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzkn\" (UniqueName: \"kubernetes.io/projected/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-kube-api-access-mrzkn\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011713 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nf4\" (UniqueName: \"kubernetes.io/projected/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-kube-api-access-d7nf4\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011785 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011393 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011866 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w554b\" (UniqueName: \"kubernetes.io/projected/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-kube-api-access-w554b\") pod 
\"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011972 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd077a-5ec6-43f5-b541-f50be415eca7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.011990 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-serving-cert\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012007 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-oauth-config\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012021 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5948f721-3c3a-4f73-90f1-cb7a5d101df1-serving-cert\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012035 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xsbft\" (UniqueName: \"kubernetes.io/projected/063cfd7b-7d93-45bc-a374-99b5e204b200-kube-api-access-xsbft\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012050 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-images\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012066 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-trusted-ca\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012081 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-config\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012099 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012143 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzjd\" (UniqueName: \"kubernetes.io/projected/1c4b3948-0466-411a-8180-5755301bd715-kube-api-access-mmzjd\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012172 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-node-pullsecrets\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012202 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012235 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4b3948-0466-411a-8180-5755301bd715-serving-cert\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012264 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74d1f814-c65c-4ef3-91a2-911e9f23d634-audit-dir\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012331 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-client\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012355 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012380 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-encryption-config\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012403 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e765fa33-9019-4903-9dc2-5ec87e89c0fe-config\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" 
Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012429 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-serving-cert\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012448 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-service-ca-bundle\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012471 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhhq\" (UniqueName: \"kubernetes.io/projected/9ccaed08-3b4a-4364-a5a3-4c2d456e9358-kube-api-access-8vhhq\") pod \"dns-operator-744455d44c-qrmll\" (UID: \"9ccaed08-3b4a-4364-a5a3-4c2d456e9358\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012501 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-client-ca\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012574 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-config\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012598 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012618 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-serving-cert\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012644 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-image-import-ca\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.012663 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-oauth-serving-cert\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 
03:34:50.013101 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-config\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013123 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-etcd-client\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013158 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013173 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cd022ff-cd8d-4e6d-8325-491d91146a99-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-frq5q\" (UID: \"9cd022ff-cd8d-4e6d-8325-491d91146a99\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013187 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxvg\" (UniqueName: 
\"kubernetes.io/projected/74d1f814-c65c-4ef3-91a2-911e9f23d634-kube-api-access-zxxvg\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013200 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-config\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013872 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl64g\" (UniqueName: \"kubernetes.io/projected/e765fa33-9019-4903-9dc2-5ec87e89c0fe-kube-api-access-jl64g\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013899 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsjxk\" (UniqueName: \"kubernetes.io/projected/9cd022ff-cd8d-4e6d-8325-491d91146a99-kube-api-access-hsjxk\") pod \"cluster-samples-operator-665b6dd947-frq5q\" (UID: \"9cd022ff-cd8d-4e6d-8325-491d91146a99\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013924 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z45b\" (UniqueName: \"kubernetes.io/projected/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-kube-api-access-8z45b\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013958 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccaed08-3b4a-4364-a5a3-4c2d456e9358-metrics-tls\") pod \"dns-operator-744455d44c-qrmll\" (UID: \"9ccaed08-3b4a-4364-a5a3-4c2d456e9358\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013973 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-serving-cert\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.013988 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-etcd-client\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014026 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-serving-cert\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014044 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-audit-policies\") 
pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014059 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-audit\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014073 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014109 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqch\" (UniqueName: \"kubernetes.io/projected/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-kube-api-access-wmqch\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014126 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-client-ca\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014140 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6vc\" (UniqueName: \"kubernetes.io/projected/aa545c14-ac24-490a-a488-9d26b26e6ea2-kube-api-access-tj6vc\") pod \"downloads-7954f5f757-hn52l\" (UID: \"aa545c14-ac24-490a-a488-9d26b26e6ea2\") " pod="openshift-console/downloads-7954f5f757-hn52l" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014157 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-trusted-ca-bundle\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014200 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm592\" (UniqueName: \"kubernetes.io/projected/5948f721-3c3a-4f73-90f1-cb7a5d101df1-kube-api-access-mm592\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014215 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-config\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014233 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-config\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " 
pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014271 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39cd077a-5ec6-43f5-b541-f50be415eca7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014288 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2sj2\" (UniqueName: \"kubernetes.io/projected/39cd077a-5ec6-43f5-b541-f50be415eca7-kube-api-access-s2sj2\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014306 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-ca\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014346 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-console-config\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.014369 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-serving-cert\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.015117 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fn7tq"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.015546 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nrvkf"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.015723 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.015732 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.015761 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.018335 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.020720 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.021266 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.021611 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.021775 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.020472 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.022168 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.022245 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.022590 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.022856 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.023176 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pt6jg"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.023285 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.023998 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.024509 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.024640 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x9kxk"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.024785 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.022859 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.025231 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.025471 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.025715 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mfrps"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.027028 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.027037 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.032851 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcqrg"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.033034 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.033420 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.033640 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.033748 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.034382 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hc49h"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.034428 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.034988 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d52pc"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.035492 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sjzgd"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.035611 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-46blp"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.035740 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.035811 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hn52l"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.035877 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.036314 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.036693 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.036805 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.036768 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.036325 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.036724 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.038084 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.038175 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.038867 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9pdgk"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.039027 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.039251 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.039803 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.040809 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.041709 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qrmll"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.042679 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtrzt"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.049319 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t7kvf"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.055254 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.055363 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.058764 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.060023 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.061429 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcqrg"] Jan 07 03:34:50 crc kubenswrapper[4980]: 
I0107 03:34:50.062690 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.065199 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.067224 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.068967 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.070039 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.072046 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.073014 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.074017 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.074895 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.075938 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nql9v"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.076931 4980 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d52pc"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.079690 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.079751 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6nqsq"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.081273 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.081929 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hc49h"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.081959 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.082004 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.082122 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.083013 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.084190 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.084923 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.086971 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6nqsq"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.087922 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.093045 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fl82f"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.093779 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xqhnv"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.094145 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.094717 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.095345 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.101037 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fl82f"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.102445 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xqhnv"] Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.114709 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.114980 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsjxk\" (UniqueName: \"kubernetes.io/projected/9cd022ff-cd8d-4e6d-8325-491d91146a99-kube-api-access-hsjxk\") pod \"cluster-samples-operator-665b6dd947-frq5q\" (UID: \"9cd022ff-cd8d-4e6d-8325-491d91146a99\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115023 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z45b\" (UniqueName: \"kubernetes.io/projected/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-kube-api-access-8z45b\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115052 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-serving-cert\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " 
pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115078 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-etcd-client\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115110 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9287\" (UniqueName: \"kubernetes.io/projected/ee35d058-0536-49f4-a31f-d19f858f2a37-kube-api-access-n9287\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115135 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7q47\" (UniqueName: \"kubernetes.io/projected/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-kube-api-access-k7q47\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115159 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-audit\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115186 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115212 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115304 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee35d058-0536-49f4-a31f-d19f858f2a37-signing-cabundle\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115333 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-trusted-ca-bundle\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115358 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-config\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115381 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3f49713b-0fe7-4b75-aefe-c44cf397b444-tmpfs\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115419 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-ca\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115443 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115469 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22pv\" (UniqueName: \"kubernetes.io/projected/5c60a3ab-4428-4658-9d0b-5ed1608bd379-kube-api-access-k22pv\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115495 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115520 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-encryption-config\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115541 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-config\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115584 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk74b\" (UniqueName: \"kubernetes.io/projected/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-kube-api-access-sk74b\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115610 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e765fa33-9019-4903-9dc2-5ec87e89c0fe-machine-approver-tls\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115632 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-webhook-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115659 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zbz\" (UniqueName: \"kubernetes.io/projected/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-kube-api-access-r5zbz\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115683 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-service-ca\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115707 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115744 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-policies\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc 
kubenswrapper[4980]: I0107 03:34:50.115768 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-secret-volume\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115794 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-etcd-serving-ca\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115819 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42kz\" (UniqueName: \"kubernetes.io/projected/3f49713b-0fe7-4b75-aefe-c44cf397b444-kube-api-access-s42kz\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115844 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzkn\" (UniqueName: \"kubernetes.io/projected/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-kube-api-access-mrzkn\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115868 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7nf4\" (UniqueName: \"kubernetes.io/projected/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-kube-api-access-d7nf4\") pod 
\"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115894 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94qv\" (UniqueName: \"kubernetes.io/projected/727de252-9019-46b7-8b54-25d3c80d5437-kube-api-access-s94qv\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.115950 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w554b\" (UniqueName: \"kubernetes.io/projected/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-kube-api-access-w554b\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116001 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116031 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: 
I0107 03:34:50.116056 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-oauth-config\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116078 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/727de252-9019-46b7-8b54-25d3c80d5437-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116104 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5948f721-3c3a-4f73-90f1-cb7a5d101df1-serving-cert\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116128 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsbft\" (UniqueName: \"kubernetes.io/projected/063cfd7b-7d93-45bc-a374-99b5e204b200-kube-api-access-xsbft\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116155 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: 
\"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116178 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116204 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-config\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116229 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-node-pullsecrets\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116262 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4b3948-0466-411a-8180-5755301bd715-serving-cert\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116287 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dx9c9\" (UniqueName: \"kubernetes.io/projected/e329501d-6d85-4f84-b9a2-3f29a0f41881-kube-api-access-dx9c9\") pod \"multus-admission-controller-857f4d67dd-d52pc\" (UID: \"e329501d-6d85-4f84-b9a2-3f29a0f41881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116390 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116424 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-client\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116448 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-serving-cert\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116471 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-encryption-config\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116498 
4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrc4q\" (UniqueName: \"kubernetes.io/projected/2a6e4d95-beda-46e5-8030-8f4f590cc22e-kube-api-access-qrc4q\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116522 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-default-certificate\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116546 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116588 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116626 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxvg\" (UniqueName: \"kubernetes.io/projected/74d1f814-c65c-4ef3-91a2-911e9f23d634-kube-api-access-zxxvg\") pod 
\"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116653 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116678 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cd022ff-cd8d-4e6d-8325-491d91146a99-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-frq5q\" (UID: \"9cd022ff-cd8d-4e6d-8325-491d91146a99\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116715 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-config\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116740 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl64g\" (UniqueName: \"kubernetes.io/projected/e765fa33-9019-4903-9dc2-5ec87e89c0fe-kube-api-access-jl64g\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116766 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-srv-cert\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116791 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116813 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d67a9936-fafd-4a76-aac2-6209c4697007-serving-cert\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116834 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-metrics-certs\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116856 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hc49h\" (UID: 
\"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116883 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccaed08-3b4a-4364-a5a3-4c2d456e9358-metrics-tls\") pod \"dns-operator-744455d44c-qrmll\" (UID: \"9ccaed08-3b4a-4364-a5a3-4c2d456e9358\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116907 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-serving-cert\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116932 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-audit-policies\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116955 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkrt\" (UniqueName: \"kubernetes.io/projected/d67a9936-fafd-4a76-aac2-6209c4697007-kube-api-access-vlkrt\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.116978 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117000 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/307b87f4-a717-4813-9ef9-4d44ca9e33f5-config\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117023 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a6e4d95-beda-46e5-8030-8f4f590cc22e-service-ca-bundle\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117045 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117059 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-service-ca\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc 
kubenswrapper[4980]: I0107 03:34:50.117070 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117099 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmqch\" (UniqueName: \"kubernetes.io/projected/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-kube-api-access-wmqch\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117122 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117145 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-client-ca\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117167 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj6vc\" (UniqueName: \"kubernetes.io/projected/aa545c14-ac24-490a-a488-9d26b26e6ea2-kube-api-access-tj6vc\") pod 
\"downloads-7954f5f757-hn52l\" (UID: \"aa545c14-ac24-490a-a488-9d26b26e6ea2\") " pod="openshift-console/downloads-7954f5f757-hn52l" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117189 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117215 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm592\" (UniqueName: \"kubernetes.io/projected/5948f721-3c3a-4f73-90f1-cb7a5d101df1-kube-api-access-mm592\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117233 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-etcd-serving-ca\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117237 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-config\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117288 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117317 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39cd077a-5ec6-43f5-b541-f50be415eca7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117338 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2sj2\" (UniqueName: \"kubernetes.io/projected/39cd077a-5ec6-43f5-b541-f50be415eca7-kube-api-access-s2sj2\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117355 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-console-config\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117378 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 
03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117396 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307b87f4-a717-4813-9ef9-4d44ca9e33f5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117417 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-serving-cert\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117436 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-config\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117460 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117487 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e765fa33-9019-4903-9dc2-5ec87e89c0fe-auth-proxy-config\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117504 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e329501d-6d85-4f84-b9a2-3f29a0f41881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d52pc\" (UID: \"e329501d-6d85-4f84-b9a2-3f29a0f41881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117523 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-audit-dir\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117544 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-service-ca\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117581 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwg2s\" (UniqueName: \"kubernetes.io/projected/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-kube-api-access-wwg2s\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc 
kubenswrapper[4980]: I0107 03:34:50.117604 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117625 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117641 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/307b87f4-a717-4813-9ef9-4d44ca9e33f5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117660 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd077a-5ec6-43f5-b541-f50be415eca7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117675 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-serving-cert\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117692 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktgx\" (UniqueName: \"kubernetes.io/projected/1a1496bf-7dfd-4cc1-867b-6733c9d71779-kube-api-access-8ktgx\") pod \"migrator-59844c95c7-4q4f7\" (UID: \"1a1496bf-7dfd-4cc1-867b-6733c9d71779\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117709 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117725 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117758 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-images\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117777 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-apiservice-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117795 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117813 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-dir\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117830 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117849 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-trusted-ca\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117865 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-proxy-tls\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117883 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117904 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmzjd\" (UniqueName: \"kubernetes.io/projected/1c4b3948-0466-411a-8180-5755301bd715-kube-api-access-mmzjd\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117923 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117941 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74d1f814-c65c-4ef3-91a2-911e9f23d634-audit-dir\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117958 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xtj2h\" (UID: \"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.117981 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np2v8\" (UniqueName: \"kubernetes.io/projected/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-kube-api-access-np2v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-xtj2h\" (UID: \"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118001 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118018 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e765fa33-9019-4903-9dc2-5ec87e89c0fe-config\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118036 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67a9936-fafd-4a76-aac2-6209c4697007-config\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118054 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-stats-auth\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118069 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcnwb\" (UniqueName: \"kubernetes.io/projected/7a9327ea-0e49-46d3-a849-bef3feed4a78-kube-api-access-rcnwb\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118088 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-service-ca-bundle\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118104 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee35d058-0536-49f4-a31f-d19f858f2a37-signing-key\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118113 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118126 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhhq\" (UniqueName: \"kubernetes.io/projected/9ccaed08-3b4a-4364-a5a3-4c2d456e9358-kube-api-access-8vhhq\") pod \"dns-operator-744455d44c-qrmll\" (UID: \"9ccaed08-3b4a-4364-a5a3-4c2d456e9358\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118144 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxxs\" (UniqueName: \"kubernetes.io/projected/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-kube-api-access-5fxxs\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118165 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-client-ca\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118182 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727de252-9019-46b7-8b54-25d3c80d5437-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118200 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-config\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118218 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-serving-cert\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118235 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-image-import-ca\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118250 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-oauth-serving-cert\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118270 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-config\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118286 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-etcd-client\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118304 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28bt\" (UniqueName: \"kubernetes.io/projected/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-kube-api-access-f28bt\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118912 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e765fa33-9019-4903-9dc2-5ec87e89c0fe-auth-proxy-config\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 
03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118961 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-audit-dir\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.119346 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-config\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.119416 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-node-pullsecrets\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.119542 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-service-ca\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.120579 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39cd077a-5ec6-43f5-b541-f50be415eca7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.121114 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.121631 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e765fa33-9019-4903-9dc2-5ec87e89c0fe-config\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.122152 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-etcd-client\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.122319 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-service-ca-bundle\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.118104 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-config\") pod \"console-operator-58897d9998-x9kxk\" (UID: 
\"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.122379 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-oauth-config\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.123294 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-serving-cert\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.122820 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-audit\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.123468 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-client-ca\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.123717 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4b3948-0466-411a-8180-5755301bd715-serving-cert\") pod \"authentication-operator-69f744f599-mfrps\" (UID: 
\"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.123738 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.124106 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74d1f814-c65c-4ef3-91a2-911e9f23d634-audit-policies\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.124231 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74d1f814-c65c-4ef3-91a2-911e9f23d634-audit-dir\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.124427 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-config\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.124509 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-config\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " 
pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.124773 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-client-ca\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.124862 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-trusted-ca-bundle\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.125173 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-config\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.125261 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccaed08-3b4a-4364-a5a3-4c2d456e9358-metrics-tls\") pod \"dns-operator-744455d44c-qrmll\" (UID: \"9ccaed08-3b4a-4364-a5a3-4c2d456e9358\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.125295 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5948f721-3c3a-4f73-90f1-cb7a5d101df1-serving-cert\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.125477 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-ca\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.125662 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c4b3948-0466-411a-8180-5755301bd715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.126054 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.126397 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-serving-cert\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.126545 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-images\") pod \"machine-api-operator-5694c8668f-sjzgd\" 
(UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.126850 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-config\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.127086 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-image-import-ca\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.127450 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-oauth-serving-cert\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.127674 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-trusted-ca\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.127714 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-etcd-client\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: 
\"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.128008 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-config\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.128498 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd077a-5ec6-43f5-b541-f50be415eca7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.128647 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cd022ff-cd8d-4e6d-8325-491d91146a99-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-frq5q\" (UID: \"9cd022ff-cd8d-4e6d-8325-491d91146a99\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.129421 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-etcd-client\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.129698 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-encryption-config\") 
pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.129726 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.129838 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.129863 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-serving-cert\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.129928 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.130022 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/74d1f814-c65c-4ef3-91a2-911e9f23d634-serving-cert\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.130028 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-encryption-config\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.130322 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-serving-cert\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.132428 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-serving-cert\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.132468 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e765fa33-9019-4903-9dc2-5ec87e89c0fe-machine-approver-tls\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.135910 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-console-config\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.156136 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.174297 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.194037 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.213974 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219496 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwg2s\" (UniqueName: \"kubernetes.io/projected/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-kube-api-access-wwg2s\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219532 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 
03:34:50.219564 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/307b87f4-a717-4813-9ef9-4d44ca9e33f5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219584 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktgx\" (UniqueName: \"kubernetes.io/projected/1a1496bf-7dfd-4cc1-867b-6733c9d71779-kube-api-access-8ktgx\") pod \"migrator-59844c95c7-4q4f7\" (UID: \"1a1496bf-7dfd-4cc1-867b-6733c9d71779\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219601 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219620 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219640 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-dir\") pod 
\"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219658 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219674 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-apiservice-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219689 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219710 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-proxy-tls\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219738 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-np2v8\" (UniqueName: \"kubernetes.io/projected/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-kube-api-access-np2v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-xtj2h\" (UID: \"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219759 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xtj2h\" (UID: \"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219781 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67a9936-fafd-4a76-aac2-6209c4697007-config\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219799 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-stats-auth\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219815 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcnwb\" (UniqueName: \"kubernetes.io/projected/7a9327ea-0e49-46d3-a849-bef3feed4a78-kube-api-access-rcnwb\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: 
\"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219832 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee35d058-0536-49f4-a31f-d19f858f2a37-signing-key\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219841 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-dir\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219855 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxxs\" (UniqueName: \"kubernetes.io/projected/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-kube-api-access-5fxxs\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219927 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727de252-9019-46b7-8b54-25d3c80d5437-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.219970 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f28bt\" (UniqueName: 
\"kubernetes.io/projected/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-kube-api-access-f28bt\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220029 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9287\" (UniqueName: \"kubernetes.io/projected/ee35d058-0536-49f4-a31f-d19f858f2a37-kube-api-access-n9287\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220063 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7q47\" (UniqueName: \"kubernetes.io/projected/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-kube-api-access-k7q47\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220093 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220126 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 
crc kubenswrapper[4980]: I0107 03:34:50.220177 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee35d058-0536-49f4-a31f-d19f858f2a37-signing-cabundle\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220210 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3f49713b-0fe7-4b75-aefe-c44cf397b444-tmpfs\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220246 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220400 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22pv\" (UniqueName: \"kubernetes.io/projected/5c60a3ab-4428-4658-9d0b-5ed1608bd379-kube-api-access-k22pv\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220459 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: 
\"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220483 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk74b\" (UniqueName: \"kubernetes.io/projected/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-kube-api-access-sk74b\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220504 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-webhook-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220524 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220581 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-policies\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220602 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-secret-volume\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220622 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s42kz\" (UniqueName: \"kubernetes.io/projected/3f49713b-0fe7-4b75-aefe-c44cf397b444-kube-api-access-s42kz\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220655 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s94qv\" (UniqueName: \"kubernetes.io/projected/727de252-9019-46b7-8b54-25d3c80d5437-kube-api-access-s94qv\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220675 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220690 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" 
Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220697 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220769 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/727de252-9019-46b7-8b54-25d3c80d5437-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220805 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220833 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220859 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220896 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9c9\" (UniqueName: \"kubernetes.io/projected/e329501d-6d85-4f84-b9a2-3f29a0f41881-kube-api-access-dx9c9\") pod \"multus-admission-controller-857f4d67dd-d52pc\" (UID: \"e329501d-6d85-4f84-b9a2-3f29a0f41881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220923 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrc4q\" (UniqueName: \"kubernetes.io/projected/2a6e4d95-beda-46e5-8030-8f4f590cc22e-kube-api-access-qrc4q\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220945 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-default-certificate\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.220951 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: 
I0107 03:34:50.220968 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221014 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-srv-cert\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221037 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221081 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d67a9936-fafd-4a76-aac2-6209c4697007-serving-cert\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221104 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-metrics-certs\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " 
pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221128 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221152 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221159 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221185 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/307b87f4-a717-4813-9ef9-4d44ca9e33f5-config\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221209 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkrt\" 
(UniqueName: \"kubernetes.io/projected/d67a9936-fafd-4a76-aac2-6209c4697007-kube-api-access-vlkrt\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221235 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a6e4d95-beda-46e5-8030-8f4f590cc22e-service-ca-bundle\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221261 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221291 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221320 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 
03:34:50.221326 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221328 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727de252-9019-46b7-8b54-25d3c80d5437-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221353 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221382 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221410 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307b87f4-a717-4813-9ef9-4d44ca9e33f5-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221444 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-config\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221471 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e329501d-6d85-4f84-b9a2-3f29a0f41881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d52pc\" (UID: \"e329501d-6d85-4f84-b9a2-3f29a0f41881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.221765 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3f49713b-0fe7-4b75-aefe-c44cf397b444-tmpfs\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.222195 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-policies\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.223180 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.223347 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.224713 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.224723 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.225763 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: 
\"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.226640 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.226721 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.226812 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.227543 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.229049 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.231096 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/727de252-9019-46b7-8b54-25d3c80d5437-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.234990 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.254205 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.274448 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.294181 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.307179 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.314664 4980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.342873 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.354034 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.354618 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.374299 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.383127 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a6e4d95-beda-46e5-8030-8f4f590cc22e-service-ca-bundle\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.394491 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.406656 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-metrics-certs\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc 
kubenswrapper[4980]: I0107 03:34:50.414768 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.434925 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.455180 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.467285 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-default-certificate\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.474211 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.495486 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.515375 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.534359 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.546874 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2a6e4d95-beda-46e5-8030-8f4f590cc22e-stats-auth\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " 
pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.554257 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.574773 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.595458 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.614855 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.634878 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.645242 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.655176 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.675025 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.684700 4980 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-config\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.694762 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.714932 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.734926 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.734970 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.735044 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.735210 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.735624 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.754934 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.775331 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.794801 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.814842 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.834530 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.847509 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.847857 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-secret-volume\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 
03:34:50.855923 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.874794 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.887105 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-srv-cert\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.894770 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.914666 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.924078 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/307b87f4-a717-4813-9ef9-4d44ca9e33f5-config\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.935324 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.945666 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/307b87f4-a717-4813-9ef9-4d44ca9e33f5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.955363 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.982269 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 07 03:34:50 crc kubenswrapper[4980]: I0107 03:34:50.994699 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.014903 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.033542 4980 request.go:700] Waited for 1.00750872s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.035930 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.054886 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.075346 4980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.095253 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.107062 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee35d058-0536-49f4-a31f-d19f858f2a37-signing-key\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.115013 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.127301 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d67a9936-fafd-4a76-aac2-6209c4697007-serving-cert\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.135357 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.156512 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.163383 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67a9936-fafd-4a76-aac2-6209c4697007-config\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:51 crc 
kubenswrapper[4980]: I0107 03:34:51.175740 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.195652 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.215156 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.220952 4980 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.220986 4980 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.221059 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-apiservice-cert podName:3f49713b-0fe7-4b75-aefe-c44cf397b444 nodeName:}" failed. No retries permitted until 2026-01-07 03:34:51.721030938 +0000 UTC m=+138.286725703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-apiservice-cert") pod "packageserver-d55dfcdfc-pc8qr" (UID: "3f49713b-0fe7-4b75-aefe-c44cf397b444") : failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.221092 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-webhook-cert podName:3f49713b-0fe7-4b75-aefe-c44cf397b444 nodeName:}" failed. 
No retries permitted until 2026-01-07 03:34:51.72107607 +0000 UTC m=+138.286770845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-webhook-cert") pod "packageserver-d55dfcdfc-pc8qr" (UID: "3f49713b-0fe7-4b75-aefe-c44cf397b444") : failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.221289 4980 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.221347 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-proxy-tls podName:c52d9fdd-cc98-4b50-a85f-6d206363fdb3 nodeName:}" failed. No retries permitted until 2026-01-07 03:34:51.721333448 +0000 UTC m=+138.287028213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-proxy-tls") pod "machine-config-controller-84d6567774-hcrkr" (UID: "c52d9fdd-cc98-4b50-a85f-6d206363fdb3") : failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.221403 4980 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.221493 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-control-plane-machine-set-operator-tls podName:7e4e8bcd-d566-43ed-ba1d-e5c367faca7d nodeName:}" failed. No retries permitted until 2026-01-07 03:34:51.721460002 +0000 UTC m=+138.287154767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-xtj2h" (UID: "7e4e8bcd-d566-43ed-ba1d-e5c367faca7d") : failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222271 4980 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222325 4980 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222345 4980 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222424 4980 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222543 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e329501d-6d85-4f84-b9a2-3f29a0f41881-webhook-certs podName:e329501d-6d85-4f84-b9a2-3f29a0f41881 nodeName:}" failed. No retries permitted until 2026-01-07 03:34:51.722447731 +0000 UTC m=+138.288142496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e329501d-6d85-4f84-b9a2-3f29a0f41881-webhook-certs") pod "multus-admission-controller-857f4d67dd-d52pc" (UID: "e329501d-6d85-4f84-b9a2-3f29a0f41881") : failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222675 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume podName:7a5331ee-46c7-4826-85cf-3c57f25f1d6c nodeName:}" failed. No retries permitted until 2026-01-07 03:34:51.722657198 +0000 UTC m=+138.288351973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume") pod "collect-profiles-29462610-54bwr" (UID: "7a5331ee-46c7-4826-85cf-3c57f25f1d6c") : failed to sync configmap cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222732 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics podName:5c60a3ab-4428-4658-9d0b-5ed1608bd379 nodeName:}" failed. No retries permitted until 2026-01-07 03:34:51.722718029 +0000 UTC m=+138.288412794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics") pod "marketplace-operator-79b997595-hc49h" (UID: "5c60a3ab-4428-4658-9d0b-5ed1608bd379") : failed to sync secret cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: E0107 03:34:51.222791 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca podName:5c60a3ab-4428-4658-9d0b-5ed1608bd379 nodeName:}" failed. 
No retries permitted until 2026-01-07 03:34:51.722775381 +0000 UTC m=+138.288470156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca") pod "marketplace-operator-79b997595-hc49h" (UID: "5c60a3ab-4428-4658-9d0b-5ed1608bd379") : failed to sync configmap cache: timed out waiting for the condition Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.223206 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee35d058-0536-49f4-a31f-d19f858f2a37-signing-cabundle\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.235078 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.255137 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.274904 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.295399 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.315144 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.335593 4980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.354922 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.375776 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.395211 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.426045 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.435546 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.455202 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.475760 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.494523 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.517075 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.535131 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 07 03:34:51 crc 
kubenswrapper[4980]: I0107 03:34:51.556894 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.575196 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.594532 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.615861 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.635385 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.655367 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.675212 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.714751 4980 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.734999 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.750401 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e329501d-6d85-4f84-b9a2-3f29a0f41881-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-d52pc\" (UID: \"e329501d-6d85-4f84-b9a2-3f29a0f41881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.750497 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-apiservice-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.750537 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-proxy-tls\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.750617 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xtj2h\" (UID: \"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.750820 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-webhook-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.751110 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.751172 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.751227 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.752604 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.753201 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.754664 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.757326 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e329501d-6d85-4f84-b9a2-3f29a0f41881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d52pc\" (UID: \"e329501d-6d85-4f84-b9a2-3f29a0f41881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.757782 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-webhook-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.757863 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xtj2h\" (UID: \"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.759719 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3f49713b-0fe7-4b75-aefe-c44cf397b444-apiservice-cert\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:51 crc kubenswrapper[4980]: 
I0107 03:34:51.762621 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-proxy-tls\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.762832 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.775427 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.795076 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.814711 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.835873 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.855707 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.875848 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.894677 4980 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.943936 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsjxk\" (UniqueName: \"kubernetes.io/projected/9cd022ff-cd8d-4e6d-8325-491d91146a99-kube-api-access-hsjxk\") pod \"cluster-samples-operator-665b6dd947-frq5q\" (UID: \"9cd022ff-cd8d-4e6d-8325-491d91146a99\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.958277 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.967270 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsbft\" (UniqueName: \"kubernetes.io/projected/063cfd7b-7d93-45bc-a374-99b5e204b200-kube-api-access-xsbft\") pod \"console-f9d7485db-46blp\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:51 crc kubenswrapper[4980]: I0107 03:34:51.983774 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzkn\" (UniqueName: \"kubernetes.io/projected/c14c8560-62e3-43b1-9916-f3bbbb9f3fd9-kube-api-access-mrzkn\") pod \"console-operator-58897d9998-x9kxk\" (UID: \"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9\") " pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.006002 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7nf4\" (UniqueName: \"kubernetes.io/projected/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-kube-api-access-d7nf4\") pod \"route-controller-manager-6576b87f9c-7rdqg\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 
03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.032871 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w554b\" (UniqueName: \"kubernetes.io/projected/2d9aa4e6-3178-46b8-bb65-ca339c36cef3-kube-api-access-w554b\") pod \"apiserver-76f77b778f-pt6jg\" (UID: \"2d9aa4e6-3178-46b8-bb65-ca339c36cef3\") " pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.042755 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhhq\" (UniqueName: \"kubernetes.io/projected/9ccaed08-3b4a-4364-a5a3-4c2d456e9358-kube-api-access-8vhhq\") pod \"dns-operator-744455d44c-qrmll\" (UID: \"9ccaed08-3b4a-4364-a5a3-4c2d456e9358\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.053473 4980 request.go:700] Waited for 1.932433796s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.062326 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2sj2\" (UniqueName: \"kubernetes.io/projected/39cd077a-5ec6-43f5-b541-f50be415eca7-kube-api-access-s2sj2\") pod \"openshift-config-operator-7777fb866f-g6pfb\" (UID: \"39cd077a-5ec6-43f5-b541-f50be415eca7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.066243 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.084618 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z45b\" (UniqueName: \"kubernetes.io/projected/aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3-kube-api-access-8z45b\") pod \"etcd-operator-b45778765-t7kvf\" (UID: \"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.098475 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxvg\" (UniqueName: \"kubernetes.io/projected/74d1f814-c65c-4ef3-91a2-911e9f23d634-kube-api-access-zxxvg\") pod \"apiserver-7bbb656c7d-cxgj5\" (UID: \"74d1f814-c65c-4ef3-91a2-911e9f23d634\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.131468 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.138758 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmqch\" (UniqueName: \"kubernetes.io/projected/2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8-kube-api-access-wmqch\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdszr\" (UID: \"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.142093 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zbz\" (UniqueName: \"kubernetes.io/projected/b7a65dcf-9933-4d66-92c4-e1c9d9e209e9-kube-api-access-r5zbz\") pod \"machine-api-operator-5694c8668f-sjzgd\" (UID: \"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.161913 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.162595 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmzjd\" (UniqueName: \"kubernetes.io/projected/1c4b3948-0466-411a-8180-5755301bd715-kube-api-access-mmzjd\") pod \"authentication-operator-69f744f599-mfrps\" (UID: \"1c4b3948-0466-411a-8180-5755301bd715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.168891 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.173604 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.183314 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.184960 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl64g\" (UniqueName: \"kubernetes.io/projected/e765fa33-9019-4903-9dc2-5ec87e89c0fe-kube-api-access-jl64g\") pod \"machine-approver-56656f9798-99nxn\" (UID: \"e765fa33-9019-4903-9dc2-5ec87e89c0fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.196056 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm592\" (UniqueName: \"kubernetes.io/projected/5948f721-3c3a-4f73-90f1-cb7a5d101df1-kube-api-access-mm592\") pod \"controller-manager-879f6c89f-nrvkf\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.196501 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.207866 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.214071 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.214384 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj6vc\" (UniqueName: \"kubernetes.io/projected/aa545c14-ac24-490a-a488-9d26b26e6ea2-kube-api-access-tj6vc\") pod \"downloads-7954f5f757-hn52l\" (UID: \"aa545c14-ac24-490a-a488-9d26b26e6ea2\") " pod="openshift-console/downloads-7954f5f757-hn52l" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.219712 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.257864 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwg2s\" (UniqueName: \"kubernetes.io/projected/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-kube-api-access-wwg2s\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.259464 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.289698 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktgx\" (UniqueName: \"kubernetes.io/projected/1a1496bf-7dfd-4cc1-867b-6733c9d71779-kube-api-access-8ktgx\") pod \"migrator-59844c95c7-4q4f7\" (UID: \"1a1496bf-7dfd-4cc1-867b-6733c9d71779\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.307538 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxxs\" (UniqueName: 
\"kubernetes.io/projected/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-kube-api-access-5fxxs\") pod \"collect-profiles-29462610-54bwr\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.309617 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7q47\" (UniqueName: \"kubernetes.io/projected/c52d9fdd-cc98-4b50-a85f-6d206363fdb3-kube-api-access-k7q47\") pod \"machine-config-controller-84d6567774-hcrkr\" (UID: \"c52d9fdd-cc98-4b50-a85f-6d206363fdb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.337294 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.347223 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pt6jg"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.353761 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.362374 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk74b\" (UniqueName: \"kubernetes.io/projected/e6badf96-ed64-4177-9bb3-9cc00ac5ce09-kube-api-access-sk74b\") pod \"ingress-operator-5b745b69d9-q7vwd\" (UID: \"e6badf96-ed64-4177-9bb3-9cc00ac5ce09\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.371805 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.373282 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np2v8\" (UniqueName: \"kubernetes.io/projected/7e4e8bcd-d566-43ed-ba1d-e5c367faca7d-kube-api-access-np2v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-xtj2h\" (UID: \"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.390640 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f28bt\" (UniqueName: \"kubernetes.io/projected/aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e-kube-api-access-f28bt\") pod \"olm-operator-6b444d44fb-2crgz\" (UID: \"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.400220 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.408836 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcnwb\" (UniqueName: 
\"kubernetes.io/projected/7a9327ea-0e49-46d3-a849-bef3feed4a78-kube-api-access-rcnwb\") pod \"oauth-openshift-558db77b4-vtrzt\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.411845 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" Jan 07 03:34:52 crc kubenswrapper[4980]: W0107 03:34:52.426114 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74d1f814_c65c_4ef3_91a2_911e9f23d634.slice/crio-29a5621aac2bbe8f7892065fa9063fc1ba2d2403004c5802cb89d76c718216e0 WatchSource:0}: Error finding container 29a5621aac2bbe8f7892065fa9063fc1ba2d2403004c5802cb89d76c718216e0: Status 404 returned error can't find the container with id 29a5621aac2bbe8f7892065fa9063fc1ba2d2403004c5802cb89d76c718216e0 Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.440285 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9287\" (UniqueName: \"kubernetes.io/projected/ee35d058-0536-49f4-a31f-d19f858f2a37-kube-api-access-n9287\") pod \"service-ca-9c57cc56f-gcqrg\" (UID: \"ee35d058-0536-49f4-a31f-d19f858f2a37\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.440666 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.456687 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s94qv\" (UniqueName: \"kubernetes.io/projected/727de252-9019-46b7-8b54-25d3c80d5437-kube-api-access-s94qv\") pod \"openshift-apiserver-operator-796bbdcf4f-6wjp7\" (UID: \"727de252-9019-46b7-8b54-25d3c80d5437\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.457059 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.466181 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hn52l" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.480067 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.485141 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s42kz\" (UniqueName: \"kubernetes.io/projected/3f49713b-0fe7-4b75-aefe-c44cf397b444-kube-api-access-s42kz\") pod \"packageserver-d55dfcdfc-pc8qr\" (UID: \"3f49713b-0fe7-4b75-aefe-c44cf397b444\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.486338 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.493746 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9kdrd\" (UID: \"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.505029 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t7kvf"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.511103 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22pv\" (UniqueName: \"kubernetes.io/projected/5c60a3ab-4428-4658-9d0b-5ed1608bd379-kube-api-access-k22pv\") pod \"marketplace-operator-79b997595-hc49h\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.529621 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkrt\" (UniqueName: \"kubernetes.io/projected/d67a9936-fafd-4a76-aac2-6209c4697007-kube-api-access-vlkrt\") pod \"service-ca-operator-777779d784-6gsc5\" (UID: \"d67a9936-fafd-4a76-aac2-6209c4697007\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.563996 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.572088 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307b87f4-a717-4813-9ef9-4d44ca9e33f5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bqgbx\" (UID: \"307b87f4-a717-4813-9ef9-4d44ca9e33f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.579777 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.583799 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" event={"ID":"74d1f814-c65c-4ef3-91a2-911e9f23d634","Type":"ContainerStarted","Data":"29a5621aac2bbe8f7892065fa9063fc1ba2d2403004c5802cb89d76c718216e0"} Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.585140 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.590651 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx9c9\" (UniqueName: \"kubernetes.io/projected/e329501d-6d85-4f84-b9a2-3f29a0f41881-kube-api-access-dx9c9\") pod \"multus-admission-controller-857f4d67dd-d52pc\" (UID: \"e329501d-6d85-4f84-b9a2-3f29a0f41881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.594516 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" event={"ID":"2d9aa4e6-3178-46b8-bb65-ca339c36cef3","Type":"ContainerStarted","Data":"926d9c0d3709f190bdd07beb079d521a896c1b7f69f0ec2094194a4cbff40adf"} Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.615283 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrc4q\" (UniqueName: \"kubernetes.io/projected/2a6e4d95-beda-46e5-8030-8f4f590cc22e-kube-api-access-qrc4q\") pod \"router-default-5444994796-fn7tq\" (UID: \"2a6e4d95-beda-46e5-8030-8f4f590cc22e\") " pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.615946 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.616190 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x9kxk"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.616231 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.634753 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.655026 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.661186 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.674942 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.694338 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.713922 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.715748 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.728133 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.734971 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.848048 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.860929 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mfrps"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.863247 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.875864 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-46blp"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.901324 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.944525 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qrmll"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.950903 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nrvkf"] Jan 07 03:34:52 crc kubenswrapper[4980]: I0107 03:34:52.958277 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7"] Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.129078 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.129265 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.129373 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.129858 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.133590 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-bound-sa-token\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.133692 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4318c07-8e55-4555-bebb-297c5bb68e73-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.133854 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-trusted-ca\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 
03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.135081 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:53.635041731 +0000 UTC m=+140.200736686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.134034 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.135227 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-tls\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.135312 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4318c07-8e55-4555-bebb-297c5bb68e73-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.135362 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-certificates\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.135448 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgss\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-kube-api-access-2wgss\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.136811 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7bc96f0c-6550-4a0f-9354-b1c6ae591b75-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mwcbv\" (UID: \"7bc96f0c-6550-4a0f-9354-b1c6ae591b75\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.150148 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc14c8560_62e3_43b1_9916_f3bbbb9f3fd9.slice/crio-8988a644a0034915c9a7f1630179364519cbf7f9e86101b8dd6e26d46998e4a4 WatchSource:0}: Error finding container 8988a644a0034915c9a7f1630179364519cbf7f9e86101b8dd6e26d46998e4a4: Status 404 returned error can't find the container with id 
8988a644a0034915c9a7f1630179364519cbf7f9e86101b8dd6e26d46998e4a4 Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.152201 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec52daf_01b4_44ed_9ccb_b6fa03c80ef8.slice/crio-14ecd8f434b3b107bb74b4c102d4e26b9b72e022c4b2aaa8b86a387df1a8e47e WatchSource:0}: Error finding container 14ecd8f434b3b107bb74b4c102d4e26b9b72e022c4b2aaa8b86a387df1a8e47e: Status 404 returned error can't find the container with id 14ecd8f434b3b107bb74b4c102d4e26b9b72e022c4b2aaa8b86a387df1a8e47e Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.155504 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c4b3948_0466_411a_8180_5755301bd715.slice/crio-221711c351dd25c4cce174dd29b520562698b6af14003292ee0cc2860b2533e1 WatchSource:0}: Error finding container 221711c351dd25c4cce174dd29b520562698b6af14003292ee0cc2860b2533e1: Status 404 returned error can't find the container with id 221711c351dd25c4cce174dd29b520562698b6af14003292ee0cc2860b2533e1 Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.157737 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39cd077a_5ec6_43f5_b541_f50be415eca7.slice/crio-fdf1a20778edcf67f80ed0ac290f08cfbbf4ae4c2407c2b9d8585e843ed6d3ee WatchSource:0}: Error finding container fdf1a20778edcf67f80ed0ac290f08cfbbf4ae4c2407c2b9d8585e843ed6d3ee: Status 404 returned error can't find the container with id fdf1a20778edcf67f80ed0ac290f08cfbbf4ae4c2407c2b9d8585e843ed6d3ee Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.162020 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063cfd7b_7d93_45bc_a374_99b5e204b200.slice/crio-1618668fa804b7685f517c88d2b7f869a363a72299bb206f8e8958310471264a 
WatchSource:0}: Error finding container 1618668fa804b7685f517c88d2b7f869a363a72299bb206f8e8958310471264a: Status 404 returned error can't find the container with id 1618668fa804b7685f517c88d2b7f869a363a72299bb206f8e8958310471264a Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.165860 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd08997fa_78f4_4c3c_a8f4_86ba967a4f35.slice/crio-cc9baa2683283ad4a43b836d3be83b46b7f9f6a51ddd84b0eab896f7b5929379 WatchSource:0}: Error finding container cc9baa2683283ad4a43b836d3be83b46b7f9f6a51ddd84b0eab896f7b5929379: Status 404 returned error can't find the container with id cc9baa2683283ad4a43b836d3be83b46b7f9f6a51ddd84b0eab896f7b5929379 Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.170043 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5948f721_3c3a_4f73_90f1_cb7a5d101df1.slice/crio-4d2c6315f2838c988b7cb9414318f7ace1b5703fafa58bd55ac47a0f3cd66856 WatchSource:0}: Error finding container 4d2c6315f2838c988b7cb9414318f7ace1b5703fafa58bd55ac47a0f3cd66856: Status 404 returned error can't find the container with id 4d2c6315f2838c988b7cb9414318f7ace1b5703fafa58bd55ac47a0f3cd66856 Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.170882 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.175111 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ccaed08_3b4a_4364_a5a3_4c2d456e9358.slice/crio-62a0af56c97782d296483da0d8c877d84282b294bd5ecf17de0ded36a5be1d2a WatchSource:0}: Error finding container 62a0af56c97782d296483da0d8c877d84282b294bd5ecf17de0ded36a5be1d2a: Status 404 returned error can't find the container with id 62a0af56c97782d296483da0d8c877d84282b294bd5ecf17de0ded36a5be1d2a Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.237265 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.237729 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:53.737607758 +0000 UTC m=+140.303302523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.250681 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-trusted-ca\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.250760 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e80a9760-331b-4b67-b194-b397e7a692c0-images\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.251843 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-trusted-ca\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.251899 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b31d370-2ed6-406a-b92f-a6c74386d4c1-config-volume\") pod \"dns-default-xqhnv\" (UID: 
\"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.252011 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485fp\" (UniqueName: \"kubernetes.io/projected/e80a9760-331b-4b67-b194-b397e7a692c0-kube-api-access-485fp\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.252056 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-registration-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.252088 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvcc7\" (UniqueName: \"kubernetes.io/projected/cce4a570-94cd-4548-9682-7d69c980686a-kube-api-access-pvcc7\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.252253 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-srv-cert\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.252339 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h79vk\" (UniqueName: \"kubernetes.io/projected/87de8454-433f-4bf0-adb3-315352ae6312-kube-api-access-h79vk\") pod \"package-server-manager-789f6589d5-gx94v\" (UID: \"87de8454-433f-4bf0-adb3-315352ae6312\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.252505 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b31d370-2ed6-406a-b92f-a6c74386d4c1-metrics-tls\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.252609 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2b5h\" (UniqueName: \"kubernetes.io/projected/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-kube-api-access-z2b5h\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.253419 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-certs\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.253458 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zfs\" (UniqueName: \"kubernetes.io/projected/aba93425-6143-4f96-aff6-178f1f1fb3ac-kube-api-access-69zfs\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.254260 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.254359 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsj6t\" (UniqueName: \"kubernetes.io/projected/1b31d370-2ed6-406a-b92f-a6c74386d4c1-kube-api-access-tsj6t\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.254658 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:53.754640268 +0000 UTC m=+140.320335223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.254786 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-plugins-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.254853 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba93425-6143-4f96-aff6-178f1f1fb3ac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.255067 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-tls\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.255114 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx6h5\" (UniqueName: 
\"kubernetes.io/projected/735bc30f-f25f-4a6f-a9f5-f22a71c5344b-kube-api-access-bx6h5\") pod \"ingress-canary-fl82f\" (UID: \"735bc30f-f25f-4a6f-a9f5-f22a71c5344b\") " pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.255193 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4318c07-8e55-4555-bebb-297c5bb68e73-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.255644 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-certificates\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.256084 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgss\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-kube-api-access-2wgss\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.256194 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-csi-data-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.256355 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/87de8454-433f-4bf0-adb3-315352ae6312-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gx94v\" (UID: \"87de8454-433f-4bf0-adb3-315352ae6312\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.256866 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-certificates\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.257166 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad664a2-98ab-4ceb-9405-377d259db0f3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.257296 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ad664a2-98ab-4ceb-9405-377d259db0f3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.257510 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aba93425-6143-4f96-aff6-178f1f1fb3ac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.258689 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e80a9760-331b-4b67-b194-b397e7a692c0-proxy-tls\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.258983 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/735bc30f-f25f-4a6f-a9f5-f22a71c5344b-cert\") pod \"ingress-canary-fl82f\" (UID: \"735bc30f-f25f-4a6f-a9f5-f22a71c5344b\") " pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.259082 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-bound-sa-token\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.259229 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-node-bootstrap-token\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc 
kubenswrapper[4980]: I0107 03:34:53.259696 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e80a9760-331b-4b67-b194-b397e7a692c0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.260426 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqgz\" (UniqueName: \"kubernetes.io/projected/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-kube-api-access-nhqgz\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.260638 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4318c07-8e55-4555-bebb-297c5bb68e73-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.260899 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-socket-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.261183 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-profile-collector-cert\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.261217 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-mountpoint-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.261278 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ad664a2-98ab-4ceb-9405-377d259db0f3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.261669 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4318c07-8e55-4555-bebb-297c5bb68e73-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.264966 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4318c07-8e55-4555-bebb-297c5bb68e73-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc 
kubenswrapper[4980]: I0107 03:34:53.266907 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-tls\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.294400 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgss\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-kube-api-access-2wgss\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.330061 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-bound-sa-token\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.361922 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362269 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e80a9760-331b-4b67-b194-b397e7a692c0-proxy-tls\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362297 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/735bc30f-f25f-4a6f-a9f5-f22a71c5344b-cert\") pod \"ingress-canary-fl82f\" (UID: \"735bc30f-f25f-4a6f-a9f5-f22a71c5344b\") " pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362332 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-node-bootstrap-token\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362362 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e80a9760-331b-4b67-b194-b397e7a692c0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362391 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqgz\" (UniqueName: \"kubernetes.io/projected/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-kube-api-access-nhqgz\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362416 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-socket-dir\") 
pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362438 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-profile-collector-cert\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362452 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-mountpoint-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362470 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ad664a2-98ab-4ceb-9405-377d259db0f3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362499 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e80a9760-331b-4b67-b194-b397e7a692c0-images\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362518 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b31d370-2ed6-406a-b92f-a6c74386d4c1-config-volume\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362536 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485fp\" (UniqueName: \"kubernetes.io/projected/e80a9760-331b-4b67-b194-b397e7a692c0-kube-api-access-485fp\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362577 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-registration-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362602 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvcc7\" (UniqueName: \"kubernetes.io/projected/cce4a570-94cd-4548-9682-7d69c980686a-kube-api-access-pvcc7\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362621 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-srv-cert\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362640 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h79vk\" (UniqueName: \"kubernetes.io/projected/87de8454-433f-4bf0-adb3-315352ae6312-kube-api-access-h79vk\") pod \"package-server-manager-789f6589d5-gx94v\" (UID: \"87de8454-433f-4bf0-adb3-315352ae6312\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362668 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b31d370-2ed6-406a-b92f-a6c74386d4c1-metrics-tls\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362688 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2b5h\" (UniqueName: \"kubernetes.io/projected/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-kube-api-access-z2b5h\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362930 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-certs\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362952 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69zfs\" (UniqueName: \"kubernetes.io/projected/aba93425-6143-4f96-aff6-178f1f1fb3ac-kube-api-access-69zfs\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.362987 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsj6t\" (UniqueName: \"kubernetes.io/projected/1b31d370-2ed6-406a-b92f-a6c74386d4c1-kube-api-access-tsj6t\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363015 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-plugins-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363036 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba93425-6143-4f96-aff6-178f1f1fb3ac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363061 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx6h5\" (UniqueName: \"kubernetes.io/projected/735bc30f-f25f-4a6f-a9f5-f22a71c5344b-kube-api-access-bx6h5\") pod \"ingress-canary-fl82f\" (UID: \"735bc30f-f25f-4a6f-a9f5-f22a71c5344b\") " pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363105 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-csi-data-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363129 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/87de8454-433f-4bf0-adb3-315352ae6312-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gx94v\" (UID: \"87de8454-433f-4bf0-adb3-315352ae6312\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363153 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad664a2-98ab-4ceb-9405-377d259db0f3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363174 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ad664a2-98ab-4ceb-9405-377d259db0f3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363196 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba93425-6143-4f96-aff6-178f1f1fb3ac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.363982 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba93425-6143-4f96-aff6-178f1f1fb3ac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.364078 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:53.864061604 +0000 UTC m=+140.429756339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.369019 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-plugins-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.369173 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-socket-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.369400 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e80a9760-331b-4b67-b194-b397e7a692c0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.369634 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-csi-data-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.371504 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-mountpoint-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.371964 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cce4a570-94cd-4548-9682-7d69c980686a-registration-dir\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.372302 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ad664a2-98ab-4ceb-9405-377d259db0f3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.372381 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b31d370-2ed6-406a-b92f-a6c74386d4c1-config-volume\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.372605 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/87de8454-433f-4bf0-adb3-315352ae6312-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gx94v\" (UID: \"87de8454-433f-4bf0-adb3-315352ae6312\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.372639 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-srv-cert\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.373387 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad664a2-98ab-4ceb-9405-377d259db0f3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 
03:34:53.377047 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b31d370-2ed6-406a-b92f-a6c74386d4c1-metrics-tls\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.377106 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-profile-collector-cert\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.397595 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvcc7\" (UniqueName: \"kubernetes.io/projected/cce4a570-94cd-4548-9682-7d69c980686a-kube-api-access-pvcc7\") pod \"csi-hostpathplugin-6nqsq\" (UID: \"cce4a570-94cd-4548-9682-7d69c980686a\") " pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.400775 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e80a9760-331b-4b67-b194-b397e7a692c0-proxy-tls\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.401132 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e80a9760-331b-4b67-b194-b397e7a692c0-images\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc 
kubenswrapper[4980]: I0107 03:34:53.402366 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/735bc30f-f25f-4a6f-a9f5-f22a71c5344b-cert\") pod \"ingress-canary-fl82f\" (UID: \"735bc30f-f25f-4a6f-a9f5-f22a71c5344b\") " pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.404113 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-node-bootstrap-token\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.405199 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba93425-6143-4f96-aff6-178f1f1fb3ac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.410025 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.411501 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-certs\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.412137 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ad664a2-98ab-4ceb-9405-377d259db0f3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-znb5v\" (UID: \"4ad664a2-98ab-4ceb-9405-377d259db0f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.435623 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsj6t\" (UniqueName: \"kubernetes.io/projected/1b31d370-2ed6-406a-b92f-a6c74386d4c1-kube-api-access-tsj6t\") pod \"dns-default-xqhnv\" (UID: \"1b31d370-2ed6-406a-b92f-a6c74386d4c1\") " pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.450685 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqgz\" (UniqueName: \"kubernetes.io/projected/7ddb847a-2850-4c75-ad28-5cf2cff5ccf8-kube-api-access-nhqgz\") pod \"catalog-operator-68c6474976-4rzfj\" (UID: \"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.466696 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.467269 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:53.96725341 +0000 UTC m=+140.532948145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.471154 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx6h5\" (UniqueName: \"kubernetes.io/projected/735bc30f-f25f-4a6f-a9f5-f22a71c5344b-kube-api-access-bx6h5\") pod \"ingress-canary-fl82f\" (UID: \"735bc30f-f25f-4a6f-a9f5-f22a71c5344b\") " pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.483881 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sjzgd"] Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.500267 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6e4d95_beda_46e5_8030_8f4f590cc22e.slice/crio-98599d2e9d2776b6ec0330ae6edb69635a3c0ae81d0d77cb8d156b5a11909481 WatchSource:0}: Error finding container 
98599d2e9d2776b6ec0330ae6edb69635a3c0ae81d0d77cb8d156b5a11909481: Status 404 returned error can't find the container with id 98599d2e9d2776b6ec0330ae6edb69635a3c0ae81d0d77cb8d156b5a11909481 Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.514581 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz"] Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.519351 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2b5h\" (UniqueName: \"kubernetes.io/projected/9f6faf11-614c-4e7e-99c0-7a4f7b62f523-kube-api-access-z2b5h\") pod \"machine-config-server-9pdgk\" (UID: \"9f6faf11-614c-4e7e-99c0-7a4f7b62f523\") " pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.530062 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79vk\" (UniqueName: \"kubernetes.io/projected/87de8454-433f-4bf0-adb3-315352ae6312-kube-api-access-h79vk\") pod \"package-server-manager-789f6589d5-gx94v\" (UID: \"87de8454-433f-4bf0-adb3-315352ae6312\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:53 crc kubenswrapper[4980]: W0107 03:34:53.537392 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7a65dcf_9933_4d66_92c4_e1c9d9e209e9.slice/crio-df9555e305a972ae135c7c0fae85d2bbaf4738f71d58a7e4125da6eb8230f3f2 WatchSource:0}: Error finding container df9555e305a972ae135c7c0fae85d2bbaf4738f71d58a7e4125da6eb8230f3f2: Status 404 returned error can't find the container with id df9555e305a972ae135c7c0fae85d2bbaf4738f71d58a7e4125da6eb8230f3f2 Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.547227 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.554080 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.555098 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485fp\" (UniqueName: \"kubernetes.io/projected/e80a9760-331b-4b67-b194-b397e7a692c0-kube-api-access-485fp\") pod \"machine-config-operator-74547568cd-q46ws\" (UID: \"e80a9760-331b-4b67-b194-b397e7a692c0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.567668 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.567868 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.067830127 +0000 UTC m=+140.633524872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.568029 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.568426 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.068409824 +0000 UTC m=+140.634104559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.571675 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zfs\" (UniqueName: \"kubernetes.io/projected/aba93425-6143-4f96-aff6-178f1f1fb3ac-kube-api-access-69zfs\") pod \"kube-storage-version-migrator-operator-b67b599dd-79w2t\" (UID: \"aba93425-6143-4f96-aff6-178f1f1fb3ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.573813 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.608488 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" event={"ID":"d08997fa-78f4-4c3c-a8f4-86ba967a4f35","Type":"ContainerStarted","Data":"cc9baa2683283ad4a43b836d3be83b46b7f9f6a51ddd84b0eab896f7b5929379"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.618071 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" event={"ID":"9ccaed08-3b4a-4364-a5a3-4c2d456e9358","Type":"ContainerStarted","Data":"62a0af56c97782d296483da0d8c877d84282b294bd5ecf17de0ded36a5be1d2a"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.623658 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" event={"ID":"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e","Type":"ContainerStarted","Data":"e4668fe392e545d7d5728dada045e3b8cdd803f9a394014b092fda653bf8c767"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.625340 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" event={"ID":"5948f721-3c3a-4f73-90f1-cb7a5d101df1","Type":"ContainerStarted","Data":"4d2c6315f2838c988b7cb9414318f7ace1b5703fafa58bd55ac47a0f3cd66856"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.626864 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" event={"ID":"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8","Type":"ContainerStarted","Data":"14ecd8f434b3b107bb74b4c102d4e26b9b72e022c4b2aaa8b86a387df1a8e47e"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.635502 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-fn7tq" event={"ID":"2a6e4d95-beda-46e5-8030-8f4f590cc22e","Type":"ContainerStarted","Data":"98599d2e9d2776b6ec0330ae6edb69635a3c0ae81d0d77cb8d156b5a11909481"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.645928 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.661362 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" event={"ID":"1a1496bf-7dfd-4cc1-867b-6733c9d71779","Type":"ContainerStarted","Data":"8c83441d3831cb81497225b44818d39425803f4adcd8b2d6f8e1e47686472bbb"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.663305 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" event={"ID":"39cd077a-5ec6-43f5-b541-f50be415eca7","Type":"ContainerStarted","Data":"fdf1a20778edcf67f80ed0ac290f08cfbbf4ae4c2407c2b9d8585e843ed6d3ee"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.668814 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.668964 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.168938118 +0000 UTC m=+140.734632853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.669030 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.669332 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.16932447 +0000 UTC m=+140.735019205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.669375 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x9kxk" event={"ID":"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9","Type":"ContainerStarted","Data":"8988a644a0034915c9a7f1630179364519cbf7f9e86101b8dd6e26d46998e4a4"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.677940 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" event={"ID":"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9","Type":"ContainerStarted","Data":"df9555e305a972ae135c7c0fae85d2bbaf4738f71d58a7e4125da6eb8230f3f2"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.682499 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" event={"ID":"9cd022ff-cd8d-4e6d-8325-491d91146a99","Type":"ContainerStarted","Data":"8b8c85135d73ece5eacb5014d73398faed848db516396dd7ced4ed214e595587"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.693974 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9pdgk" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.720751 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fl82f" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.724934 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.725230 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" event={"ID":"1c4b3948-0466-411a-8180-5755301bd715","Type":"ContainerStarted","Data":"221711c351dd25c4cce174dd29b520562698b6af14003292ee0cc2860b2533e1"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.772463 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.773734 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.272785344 +0000 UTC m=+140.838480079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.809659 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.852017 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" event={"ID":"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3","Type":"ContainerStarted","Data":"b0db7f5cae4316ecea58bcb27c8160d9aa274fca6da06045ce9043e01f31ecd1"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.852054 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h"] Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.852072 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-46blp" event={"ID":"063cfd7b-7d93-45bc-a374-99b5e204b200","Type":"ContainerStarted","Data":"1618668fa804b7685f517c88d2b7f869a363a72299bb206f8e8958310471264a"} Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.873704 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.874182 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.374165404 +0000 UTC m=+140.939860139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:53 crc kubenswrapper[4980]: I0107 03:34:53.974391 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:53 crc kubenswrapper[4980]: E0107 03:34:53.974743 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.47472881 +0000 UTC m=+141.040423545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.071236 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.075305 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.075642 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.575631095 +0000 UTC m=+141.141325830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.177002 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.177895 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.677876593 +0000 UTC m=+141.243571328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: W0107 03:34:54.197184 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f6faf11_614c_4e7e_99c0_7a4f7b62f523.slice/crio-f318130e024630b663cad7f811585350841ccc7ffd3f22d2b9d894aff92d1252 WatchSource:0}: Error finding container f318130e024630b663cad7f811585350841ccc7ffd3f22d2b9d894aff92d1252: Status 404 returned error can't find the container with id f318130e024630b663cad7f811585350841ccc7ffd3f22d2b9d894aff92d1252 Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.243630 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.261612 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hc49h"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.279675 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.279973 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.779963434 +0000 UTC m=+141.345658169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.305136 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.315123 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.381978 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.382431 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:54.882415978 +0000 UTC m=+141.448110713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.420039 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hn52l"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.458306 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.481992 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d52pc"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.486670 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.500466 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fl82f"] Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.523236 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.013181616 +0000 UTC m=+141.578876351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.531095 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.551619 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.590883 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.591244 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.091225153 +0000 UTC m=+141.656919888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.648410 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcqrg"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.691677 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.692103 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.192086578 +0000 UTC m=+141.757781313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: W0107 03:34:54.713959 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a5331ee_46c7_4826_85cf_3c57f25f1d6c.slice/crio-fc755400a2cb9698242489bb7962e204ba9590138602ece38e267482d5016c94 WatchSource:0}: Error finding container fc755400a2cb9698242489bb7962e204ba9590138602ece38e267482d5016c94: Status 404 returned error can't find the container with id fc755400a2cb9698242489bb7962e204ba9590138602ece38e267482d5016c94 Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.794317 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.794846 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.294829799 +0000 UTC m=+141.860524534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.848327 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" event={"ID":"1a1496bf-7dfd-4cc1-867b-6733c9d71779","Type":"ContainerStarted","Data":"8d4bebf9af1e3a0c3ff5aa154ad792eef8d058da6bbd9ad63be148efa0faf827"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.852466 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" event={"ID":"2ec52daf-01b4-44ed-9ccb-b6fa03c80ef8","Type":"ContainerStarted","Data":"ec9803e9c1c20a0429da83a32a7e3e99ccb8c7b9af0ce82a0e187386562c77ec"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.871287 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.878127 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" event={"ID":"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d","Type":"ContainerStarted","Data":"a72a95728259208a5b8f673c1b657eadd7b7bee0ed5b1754ee7b2e23ca5e531e"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.883383 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hn52l" 
event={"ID":"aa545c14-ac24-490a-a488-9d26b26e6ea2","Type":"ContainerStarted","Data":"51b0553d20003987f3622bc2764665b8df3e867dc112ca2c9015ac996e20f7ac"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.884505 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" event={"ID":"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9","Type":"ContainerStarted","Data":"cfc2dd010d6f8f538ae0671bb98c489f3f8bbc4d7a0b37ed445d673970a51b7a"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.885978 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9pdgk" event={"ID":"9f6faf11-614c-4e7e-99c0-7a4f7b62f523","Type":"ContainerStarted","Data":"f318130e024630b663cad7f811585350841ccc7ffd3f22d2b9d894aff92d1252"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.894462 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" event={"ID":"9cd022ff-cd8d-4e6d-8325-491d91146a99","Type":"ContainerStarted","Data":"4e962fb5e98ed835b2fd18f107be274fcd25616d0a75fb5266194bb7e28cd5ba"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.896841 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.897143 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.397131588 +0000 UTC m=+141.962826323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.900344 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6nqsq"] Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.911238 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" event={"ID":"5c60a3ab-4428-4658-9d0b-5ed1608bd379","Type":"ContainerStarted","Data":"de39d6d0a50cb0c33278b98318480915e2eaac1766c971517c38e8da8fe84fd5"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.913329 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" event={"ID":"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84","Type":"ContainerStarted","Data":"bd88e7d82d0e8f6c527e1496b1f6813bbf30832f95442b0559ffeed3693006d0"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.914540 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" event={"ID":"7bc96f0c-6550-4a0f-9354-b1c6ae591b75","Type":"ContainerStarted","Data":"a5e4775c441a6b1764c52f0214f3ef9d89a1b0bbbc581e3f53ebe14976831fa0"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.918011 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" 
event={"ID":"5948f721-3c3a-4f73-90f1-cb7a5d101df1","Type":"ContainerStarted","Data":"6aeb9e5567aa8840b2e75157e8821da3a15d67eaceea4f0f06cbb72da8a4ccfb"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.921203 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.946651 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fn7tq" event={"ID":"2a6e4d95-beda-46e5-8030-8f4f590cc22e","Type":"ContainerStarted","Data":"12cadbd88b521632411b4563ede9414b0fbbddb84018d28f599c42ec41341d1f"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.962219 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" event={"ID":"e765fa33-9019-4903-9dc2-5ec87e89c0fe","Type":"ContainerStarted","Data":"c13ef69e4a57f21ae84afd143ea6815de43a602a7e73e2af2ee2876e87824f53"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.971848 4980 generic.go:334] "Generic (PLEG): container finished" podID="74d1f814-c65c-4ef3-91a2-911e9f23d634" containerID="6a409d4eae5c77d90ca2ae40f52bc8c19b396a7914a8c0c311844c74f0be4be9" exitCode=0 Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.971951 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" event={"ID":"74d1f814-c65c-4ef3-91a2-911e9f23d634","Type":"ContainerDied","Data":"6a409d4eae5c77d90ca2ae40f52bc8c19b396a7914a8c0c311844c74f0be4be9"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.994529 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" event={"ID":"1c4b3948-0466-411a-8180-5755301bd715","Type":"ContainerStarted","Data":"04577aa8f65d7f0304563169d659cd4eb8727a8424912d769af2202c88c92648"} Jan 07 03:34:54 crc kubenswrapper[4980]: 
I0107 03:34:54.998047 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" event={"ID":"aa4b53dc-f81d-4129-bcdb-75ff5a7b27a3","Type":"ContainerStarted","Data":"bf98aa61ee26d607cf037bf5745888d2feaf1c2ddbb6b867c8aa179c755da982"} Jan 07 03:34:54 crc kubenswrapper[4980]: I0107 03:34:54.998349 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:54 crc kubenswrapper[4980]: E0107 03:34:54.999152 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.499137277 +0000 UTC m=+142.064832012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.077438 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.083472 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" event={"ID":"7a5331ee-46c7-4826-85cf-3c57f25f1d6c","Type":"ContainerStarted","Data":"fc755400a2cb9698242489bb7962e204ba9590138602ece38e267482d5016c94"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.113319 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.118795 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.618762566 +0000 UTC m=+142.184457301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.128445 4980 generic.go:334] "Generic (PLEG): container finished" podID="2d9aa4e6-3178-46b8-bb65-ca339c36cef3" containerID="402e61d6c48115edcc10c003969b3bc6a7346b17ecd3d074f8dbb37152bbce88" exitCode=0 Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.128673 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" event={"ID":"2d9aa4e6-3178-46b8-bb65-ca339c36cef3","Type":"ContainerDied","Data":"402e61d6c48115edcc10c003969b3bc6a7346b17ecd3d074f8dbb37152bbce88"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.188561 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.206783 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:34:55 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:34:55 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:34:55 crc kubenswrapper[4980]: healthz check failed Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.206841 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.216912 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.217658 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.717644599 +0000 UTC m=+142.283339334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.236813 4980 generic.go:334] "Generic (PLEG): container finished" podID="39cd077a-5ec6-43f5-b541-f50be415eca7" containerID="491f200c13f40ff4442283257eb2982ca0e324483779874f13d7d3b8a636ef2b" exitCode=0 Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.236909 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" event={"ID":"39cd077a-5ec6-43f5-b541-f50be415eca7","Type":"ContainerDied","Data":"491f200c13f40ff4442283257eb2982ca0e324483779874f13d7d3b8a636ef2b"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.250253 4980 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" event={"ID":"9ccaed08-3b4a-4364-a5a3-4c2d456e9358","Type":"ContainerStarted","Data":"5a0458a2c70fc8e2e997f7fe238bcc8db89fb3a7cc011d934e0e2277ff5349f3"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.253195 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtrzt"] Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.288110 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" event={"ID":"d08997fa-78f4-4c3c-a8f4-86ba967a4f35","Type":"ContainerStarted","Data":"8a2cb4e6435170533bef663945ad6308eeef99f1a04460a38e01bce54a6365f5"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.288987 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.311341 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx"] Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.319651 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.321023 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-07 03:34:55.8210069 +0000 UTC m=+142.386701635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.369124 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x9kxk" event={"ID":"c14c8560-62e3-43b1-9916-f3bbbb9f3fd9","Type":"ContainerStarted","Data":"88dfff50acb00db968b7cd48f9df544878d10fe2af7616cb4fe5aee5c0ed0fa7"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.370263 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.373106 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v"] Jan 07 03:34:55 crc kubenswrapper[4980]: W0107 03:34:55.398718 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87de8454_433f_4bf0_adb3_315352ae6312.slice/crio-d1d90bf50a2025777bbc87b167581ccd5927a12825245eeb13f26a2afcc5bcfa WatchSource:0}: Error finding container d1d90bf50a2025777bbc87b167581ccd5927a12825245eeb13f26a2afcc5bcfa: Status 404 returned error can't find the container with id d1d90bf50a2025777bbc87b167581ccd5927a12825245eeb13f26a2afcc5bcfa Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.401129 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" podStartSLOduration=121.401116381 podStartE2EDuration="2m1.401116381s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:55.399248223 +0000 UTC m=+141.964942958" watchObservedRunningTime="2026-01-07 03:34:55.401116381 +0000 UTC m=+141.966811116" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.402778 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" event={"ID":"c52d9fdd-cc98-4b50-a85f-6d206363fdb3","Type":"ContainerStarted","Data":"6a7c77ef6ad781a2eeaa42a2e4d0013dbfa1f1eade7cf44f56234a2396f33d06"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.420504 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.423958 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v"] Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.424094 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" event={"ID":"d67a9936-fafd-4a76-aac2-6209c4697007","Type":"ContainerStarted","Data":"b1edc2448155bc141b8fe1dc4fe2702d6f115d4ade6db9d93d5a5c76545fd582"} Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.426077 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:55.924536726 +0000 UTC m=+142.490231461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.476394 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" event={"ID":"3f49713b-0fe7-4b75-aefe-c44cf397b444","Type":"ContainerStarted","Data":"f9e55d6fc8c74ec0333f62b29a50c5fb5af548856c27f3e1ac31d28553ee2eee"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.495347 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-x9kxk" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.518427 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fn7tq" podStartSLOduration=121.518412978 podStartE2EDuration="2m1.518412978s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:55.453080269 +0000 UTC m=+142.018775004" watchObservedRunningTime="2026-01-07 03:34:55.518412978 +0000 UTC m=+142.084107713" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.526270 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws"] Jan 07 03:34:55 crc kubenswrapper[4980]: 
I0107 03:34:55.526341 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" event={"ID":"e329501d-6d85-4f84-b9a2-3f29a0f41881","Type":"ContainerStarted","Data":"779c9a6180b9e8ff9f6a97b75cddc9c4c9f0bce928827e11b775a4bd679ffb5f"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.527702 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.533146 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.033131488 +0000 UTC m=+142.598826223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.557014 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-46blp" event={"ID":"063cfd7b-7d93-45bc-a374-99b5e204b200","Type":"ContainerStarted","Data":"bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.564666 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mfrps" podStartSLOduration=121.56461951 podStartE2EDuration="2m1.56461951s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:55.542026169 +0000 UTC m=+142.107720904" watchObservedRunningTime="2026-01-07 03:34:55.56461951 +0000 UTC m=+142.130314245" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.593129 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" podStartSLOduration=121.593108852 podStartE2EDuration="2m1.593108852s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:55.590832092 +0000 UTC m=+142.156526817" watchObservedRunningTime="2026-01-07 03:34:55.593108852 +0000 UTC m=+142.158803587" Jan 07 03:34:55 crc 
kubenswrapper[4980]: I0107 03:34:55.595235 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.636036 4980 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2crgz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.636106 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" podUID="aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.637594 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.637945 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.137929352 +0000 UTC m=+142.703624077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.649265 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" event={"ID":"e6badf96-ed64-4177-9bb3-9cc00ac5ce09","Type":"ContainerStarted","Data":"f401fa92b7e914785cb64de50c96fb8a34674d5c64e27be572b95b2fd8fdac2e"} Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.708832 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdszr" podStartSLOduration=122.708816201 podStartE2EDuration="2m2.708816201s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:55.708168961 +0000 UTC m=+142.273863696" watchObservedRunningTime="2026-01-07 03:34:55.708816201 +0000 UTC m=+142.274510936" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.740040 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.759137 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.259121709 +0000 UTC m=+142.824816444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.784540 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj"] Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.849409 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.849736 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.349722069 +0000 UTC m=+142.915416804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.905664 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-t7kvf" podStartSLOduration=121.9056484 podStartE2EDuration="2m1.9056484s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:55.848586415 +0000 UTC m=+142.414281150" watchObservedRunningTime="2026-01-07 03:34:55.9056484 +0000 UTC m=+142.471343135" Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.905850 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t"] Jan 07 03:34:55 crc kubenswrapper[4980]: I0107 03:34:55.957310 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:55 crc kubenswrapper[4980]: E0107 03:34:55.957761 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-07 03:34:56.457748293 +0000 UTC m=+143.023443028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.019148 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xqhnv"] Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.061267 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.062174 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.562158006 +0000 UTC m=+143.127852741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: W0107 03:34:56.075160 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b31d370_2ed6_406a_b92f_a6c74386d4c1.slice/crio-69c7029caf496b7468a2d3f0c701d8b917717fb48e6d6884fec476ac83db0700 WatchSource:0}: Error finding container 69c7029caf496b7468a2d3f0c701d8b917717fb48e6d6884fec476ac83db0700: Status 404 returned error can't find the container with id 69c7029caf496b7468a2d3f0c701d8b917717fb48e6d6884fec476ac83db0700 Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.097832 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" podStartSLOduration=122.097817127 podStartE2EDuration="2m2.097817127s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:56.0518422 +0000 UTC m=+142.617536935" watchObservedRunningTime="2026-01-07 03:34:56.097817127 +0000 UTC m=+142.663511862" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.098759 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-46blp" podStartSLOduration=123.098754316 podStartE2EDuration="2m3.098754316s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-07 03:34:56.097058253 +0000 UTC m=+142.662752988" watchObservedRunningTime="2026-01-07 03:34:56.098754316 +0000 UTC m=+142.664449051" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.136512 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-x9kxk" podStartSLOduration=123.136493619 podStartE2EDuration="2m3.136493619s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:56.136006204 +0000 UTC m=+142.701700939" watchObservedRunningTime="2026-01-07 03:34:56.136493619 +0000 UTC m=+142.702188354" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.139321 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:34:56 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:34:56 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:34:56 crc kubenswrapper[4980]: healthz check failed Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.139377 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.175457 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.175908 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.675894334 +0000 UTC m=+143.241589069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.225009 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.278100 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.278805 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.778789651 +0000 UTC m=+143.344484386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.390072 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.390447 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.890420784 +0000 UTC m=+143.456115529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.492263 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.493033 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:56.993008832 +0000 UTC m=+143.558703567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.594338 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.594670 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.09465886 +0000 UTC m=+143.660353595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.697593 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.697910 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.197896208 +0000 UTC m=+143.763590933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.756311 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" event={"ID":"ec6e6ff0-3a42-47a7-bf79-7ca713cfaf84","Type":"ContainerStarted","Data":"e835fd3a87939bbf7e50aeb4770211190bada41b159f412fd40a5760207abff3"} Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.773741 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" event={"ID":"7a9327ea-0e49-46d3-a849-bef3feed4a78","Type":"ContainerStarted","Data":"1934c386f956929fbbe841b96df5e1a9fbaa70092e6985425dac20e7ed449888"} Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.782698 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" event={"ID":"87de8454-433f-4bf0-adb3-315352ae6312","Type":"ContainerStarted","Data":"27a424617369f1d897c809e2bbd8894424ad6add3c6da1fec1acf29eb6ceb6cf"} Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.782737 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" event={"ID":"87de8454-433f-4bf0-adb3-315352ae6312","Type":"ContainerStarted","Data":"d1d90bf50a2025777bbc87b167581ccd5927a12825245eeb13f26a2afcc5bcfa"} Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.798461 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9kdrd" podStartSLOduration=122.798445113 podStartE2EDuration="2m2.798445113s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:56.796840793 +0000 UTC m=+143.362535528" watchObservedRunningTime="2026-01-07 03:34:56.798445113 +0000 UTC m=+143.364139848" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.800773 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.801908 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.301893918 +0000 UTC m=+143.867588653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.825165 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" event={"ID":"c52d9fdd-cc98-4b50-a85f-6d206363fdb3","Type":"ContainerStarted","Data":"4aaf25182c02fc5d3814cd6d8c5ea7015e56c1f4892f5f7ce1f0eb6566e9dd3b"} Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.854357 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" event={"ID":"1a1496bf-7dfd-4cc1-867b-6733c9d71779","Type":"ContainerStarted","Data":"3f7bc67b3f2cd0034f39650fb2d20e3c92733679726ab42704b79fbd3725965d"} Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.895586 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hn52l" event={"ID":"aa545c14-ac24-490a-a488-9d26b26e6ea2","Type":"ContainerStarted","Data":"2ca07791ab3875df26e71552fc94f9b2a5b299326a7ba628b017a80a93b7a128"} Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.896095 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hn52l" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.896889 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4q4f7" podStartSLOduration=122.896878022 podStartE2EDuration="2m2.896878022s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:56.895528962 +0000 UTC m=+143.461223697" watchObservedRunningTime="2026-01-07 03:34:56.896878022 +0000 UTC m=+143.462572747" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.900028 4980 patch_prober.go:28] interesting pod/downloads-7954f5f757-hn52l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.900064 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hn52l" podUID="aa545c14-ac24-490a-a488-9d26b26e6ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 07 03:34:56 crc kubenswrapper[4980]: I0107 03:34:56.905696 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:56 crc kubenswrapper[4980]: E0107 03:34:56.908846 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.408822208 +0000 UTC m=+143.974516933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.007140 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.007967 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.50795071 +0000 UTC m=+144.073645435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.069686 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" event={"ID":"5c60a3ab-4428-4658-9d0b-5ed1608bd379","Type":"ContainerStarted","Data":"f2eab25c8a3929a296531e2fc050430c5a19455278bb97e864c6f82d4937d756"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.070661 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.074877 4980 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hc49h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.074928 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" podUID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.082582 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xqhnv" 
event={"ID":"1b31d370-2ed6-406a-b92f-a6c74386d4c1","Type":"ContainerStarted","Data":"69c7029caf496b7468a2d3f0c701d8b917717fb48e6d6884fec476ac83db0700"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.097002 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" event={"ID":"ee35d058-0536-49f4-a31f-d19f858f2a37","Type":"ContainerStarted","Data":"507f31eab276b2e174de6ed77941912129fccf4366716db940ef93985bd98894"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.097045 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" event={"ID":"ee35d058-0536-49f4-a31f-d19f858f2a37","Type":"ContainerStarted","Data":"0fd82970366922f36186d6c8effbe5312f3b6615b71690370c9c6ddebdc20960"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.108089 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.109139 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.609124554 +0000 UTC m=+144.174819289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.123516 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hn52l" podStartSLOduration=124.123497333 podStartE2EDuration="2m4.123497333s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:56.948543223 +0000 UTC m=+143.514237958" watchObservedRunningTime="2026-01-07 03:34:57.123497333 +0000 UTC m=+143.689192068" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.124063 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" podStartSLOduration=123.12405814 podStartE2EDuration="2m3.12405814s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.120921434 +0000 UTC m=+143.686616169" watchObservedRunningTime="2026-01-07 03:34:57.12405814 +0000 UTC m=+143.689752875" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.143627 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" event={"ID":"9ccaed08-3b4a-4364-a5a3-4c2d456e9358","Type":"ContainerStarted","Data":"7c12cddd08e7e42f671972cdaa576383ce82faca56ffb47bb0728c797b766510"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 
03:34:57.148598 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:34:57 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:34:57 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:34:57 crc kubenswrapper[4980]: healthz check failed Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.148648 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.177876 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" event={"ID":"9cd022ff-cd8d-4e6d-8325-491d91146a99","Type":"ContainerStarted","Data":"1536c695d420e565d25983b56ce8f1f95ceca7fe4e7e979e76df526f8775c6ab"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.187532 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gcqrg" podStartSLOduration=123.187517291 podStartE2EDuration="2m3.187517291s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.18682976 +0000 UTC m=+143.752524495" watchObservedRunningTime="2026-01-07 03:34:57.187517291 +0000 UTC m=+143.753212026" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.211389 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.213475 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.713459564 +0000 UTC m=+144.279154299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.228053 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" event={"ID":"7e4e8bcd-d566-43ed-ba1d-e5c367faca7d","Type":"ContainerStarted","Data":"42d46f582dd855530222fbf5c4440732ad3319d111aaef6446f3f5383aced47a"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.232782 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frq5q" podStartSLOduration=124.232770404 podStartE2EDuration="2m4.232770404s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.230072732 +0000 UTC m=+143.795767477" watchObservedRunningTime="2026-01-07 03:34:57.232770404 
+0000 UTC m=+143.798465139" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.269484 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" event={"ID":"3f49713b-0fe7-4b75-aefe-c44cf397b444","Type":"ContainerStarted","Data":"c13bc6844710ace6f50f73f1348ac19a5996f11892cf2111c64ac90639b67aeb"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.271044 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.317807 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.319887 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.819862608 +0000 UTC m=+144.385557343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.324654 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qrmll" podStartSLOduration=123.324620084 podStartE2EDuration="2m3.324620084s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.278870395 +0000 UTC m=+143.844565120" watchObservedRunningTime="2026-01-07 03:34:57.324620084 +0000 UTC m=+143.890314819" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.375410 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fl82f" event={"ID":"735bc30f-f25f-4a6f-a9f5-f22a71c5344b","Type":"ContainerStarted","Data":"83665a442ed2483889f8d1d63cc736d88ae0abbbc2768fe175d1158c131d1ae5"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.375609 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fl82f" event={"ID":"735bc30f-f25f-4a6f-a9f5-f22a71c5344b","Type":"ContainerStarted","Data":"e6c841a912674ddeb0296cbb2237523639b6651187e89383785ddbab79a10ac1"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.420798 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xtj2h" podStartSLOduration=123.420782804 podStartE2EDuration="2m3.420782804s" podCreationTimestamp="2026-01-07 
03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.353694563 +0000 UTC m=+143.919389298" watchObservedRunningTime="2026-01-07 03:34:57.420782804 +0000 UTC m=+143.986477539" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.422265 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.428243 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:57.928217222 +0000 UTC m=+144.493911957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.480365 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" event={"ID":"b7a65dcf-9933-4d66-92c4-e1c9d9e209e9","Type":"ContainerStarted","Data":"cc628f3ae0093435e9d1930494c23f89f7fe0df4a900d9ee7d448256a4491914"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.498100 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" podStartSLOduration=123.498084929 podStartE2EDuration="2m3.498084929s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.423034673 +0000 UTC m=+143.988729408" watchObservedRunningTime="2026-01-07 03:34:57.498084929 +0000 UTC m=+144.063779664" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.504388 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" event={"ID":"cce4a570-94cd-4548-9682-7d69c980686a","Type":"ContainerStarted","Data":"81070b025e6e588688ad09a2303e277d1ae559c4bc9244d7aa1b0c5360c44b17"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.524122 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.525236 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.025214558 +0000 UTC m=+144.590909293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.537141 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" event={"ID":"727de252-9019-46b7-8b54-25d3c80d5437","Type":"ContainerStarted","Data":"55e733d32d9682c7e1e205b083ce520476f2f907f18311edc38c7e53a120bdd0"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.569837 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" event={"ID":"307b87f4-a717-4813-9ef9-4d44ca9e33f5","Type":"ContainerStarted","Data":"4e80ed2c4a3af6ef14e24009c2b7b33c887cf5c30a822371c1961e0da6fae1ba"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.597089 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fl82f" podStartSLOduration=7.597070345 
podStartE2EDuration="7.597070345s" podCreationTimestamp="2026-01-07 03:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.499050478 +0000 UTC m=+144.064745213" watchObservedRunningTime="2026-01-07 03:34:57.597070345 +0000 UTC m=+144.162765080" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.597992 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" event={"ID":"e329501d-6d85-4f84-b9a2-3f29a0f41881","Type":"ContainerStarted","Data":"8bd7944c6a77dde0c5703e37612d4cc87f1043077efa9f541180b6507953d536"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.621813 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" event={"ID":"e80a9760-331b-4b67-b194-b397e7a692c0","Type":"ContainerStarted","Data":"e320739c61ee541f0641c47c968f69d4ef47ed15e2f4f424eccdb792e45c9cfa"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.627543 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.627877 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.127865967 +0000 UTC m=+144.693560702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.644967 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" podStartSLOduration=123.6449487 podStartE2EDuration="2m3.6449487s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.628470696 +0000 UTC m=+144.194165421" watchObservedRunningTime="2026-01-07 03:34:57.6449487 +0000 UTC m=+144.210643435" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.645248 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" podStartSLOduration=123.645242319 podStartE2EDuration="2m3.645242319s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.598376606 +0000 UTC m=+144.164071341" watchObservedRunningTime="2026-01-07 03:34:57.645242319 +0000 UTC m=+144.210937054" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.667400 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" event={"ID":"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8","Type":"ContainerStarted","Data":"0d7a80650f58f4a01ad470886bfc1ffc8ce47486c58a74c928e846746ac4da2d"} Jan 07 
03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.676755 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.678880 4980 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4rzfj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.678923 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" podUID="7ddb847a-2850-4c75-ad28-5cf2cff5ccf8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.706893 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9pdgk" event={"ID":"9f6faf11-614c-4e7e-99c0-7a4f7b62f523","Type":"ContainerStarted","Data":"75a93cd41b954bb936c978f3ecddd66b9eb4048ae5e5070a1ca2f0023cee4409"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.715490 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" podStartSLOduration=123.715480437 podStartE2EDuration="2m3.715480437s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.71495295 +0000 UTC m=+144.280647675" watchObservedRunningTime="2026-01-07 03:34:57.715480437 +0000 UTC m=+144.281175172" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.717736 4980 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjzgd" podStartSLOduration=123.717728345 podStartE2EDuration="2m3.717728345s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.668051826 +0000 UTC m=+144.233746561" watchObservedRunningTime="2026-01-07 03:34:57.717728345 +0000 UTC m=+144.283423080" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.734392 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.735382 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.235366855 +0000 UTC m=+144.801061580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.762726 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9pdgk" podStartSLOduration=8.762711771 podStartE2EDuration="8.762711771s" podCreationTimestamp="2026-01-07 03:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.760597157 +0000 UTC m=+144.326291892" watchObservedRunningTime="2026-01-07 03:34:57.762711771 +0000 UTC m=+144.328406506" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.770279 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" event={"ID":"e765fa33-9019-4903-9dc2-5ec87e89c0fe","Type":"ContainerStarted","Data":"ef2132d0df1b466304de3cfd3553ddd6a13b979352b334d02d99b44469bc087d"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.787243 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" event={"ID":"e6badf96-ed64-4177-9bb3-9cc00ac5ce09","Type":"ContainerStarted","Data":"c4ecb19601e41756660325445eb2df493bfca1e6c2f1ac21bf94490e0cbd2d06"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.789149 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" 
event={"ID":"aafc0de9-3c62-47fa-a2b1-4e6fdfc6806e","Type":"ContainerStarted","Data":"08395a7fd38207f79e1c62b65f38ee5eb5de2b0b0483bcae9ee6ebcfaf8af285"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.802280 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" event={"ID":"aba93425-6143-4f96-aff6-178f1f1fb3ac","Type":"ContainerStarted","Data":"7ed3c56a2b0c39ae3e6aeb81017cf3589c86f4c780c63e270587d65c310d3e6c"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.817616 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2crgz" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.817894 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" event={"ID":"d67a9936-fafd-4a76-aac2-6209c4697007","Type":"ContainerStarted","Data":"00a830ac374ba29e1542a954e17b2ada7003f8ba034daf5ea1c46bfc1fd8dc2b"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.841985 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.843687 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.343671967 +0000 UTC m=+144.909366702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.846669 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" podStartSLOduration=123.846654718 podStartE2EDuration="2m3.846654718s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.845605417 +0000 UTC m=+144.411300152" watchObservedRunningTime="2026-01-07 03:34:57.846654718 +0000 UTC m=+144.412349453" Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.850709 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" event={"ID":"7bc96f0c-6550-4a0f-9354-b1c6ae591b75","Type":"ContainerStarted","Data":"866b0b297c2e2aa3d16f878c0320e909b7821e9951c153c793e9036c014328bc"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.881773 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" event={"ID":"4ad664a2-98ab-4ceb-9405-377d259db0f3","Type":"ContainerStarted","Data":"c2de91ab2d0dd229e3e29694504b59022d6723b2840283f7f15cf1b0ae4f9909"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.933872 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" 
event={"ID":"7a5331ee-46c7-4826-85cf-3c57f25f1d6c","Type":"ContainerStarted","Data":"21b2b1e6fd1c40ca83a8857e70933c9a9dea5971e8af4944cd7932c451885613"} Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.950013 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.950211 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.450191634 +0000 UTC m=+145.015886369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.950280 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:57 crc kubenswrapper[4980]: E0107 03:34:57.952133 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.452126144 +0000 UTC m=+145.017820879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:57 crc kubenswrapper[4980]: I0107 03:34:57.960670 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6gsc5" podStartSLOduration=123.960654794 podStartE2EDuration="2m3.960654794s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.957978462 +0000 UTC m=+144.523673197" watchObservedRunningTime="2026-01-07 03:34:57.960654794 +0000 UTC m=+144.526349529" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.001893 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" podStartSLOduration=124.001876235 podStartE2EDuration="2m4.001876235s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:57.999836612 +0000 UTC m=+144.565531347" watchObservedRunningTime="2026-01-07 03:34:58.001876235 +0000 UTC m=+144.567570980" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 
03:34:58.041101 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" podStartSLOduration=124.041084664 podStartE2EDuration="2m4.041084664s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:58.038896098 +0000 UTC m=+144.604590833" watchObservedRunningTime="2026-01-07 03:34:58.041084664 +0000 UTC m=+144.606779399" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.053000 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.053111 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.553086341 +0000 UTC m=+145.118781076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.053224 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.054268 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.554260937 +0000 UTC m=+145.119955672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.141920 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:34:58 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:34:58 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:34:58 crc kubenswrapper[4980]: healthz check failed Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.147015 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.158994 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.159519 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-07 03:34:58.659499495 +0000 UTC m=+145.225194230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.185801 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" podStartSLOduration=124.185764119 podStartE2EDuration="2m4.185764119s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:58.116990576 +0000 UTC m=+144.682685301" watchObservedRunningTime="2026-01-07 03:34:58.185764119 +0000 UTC m=+144.751458854" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.230634 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwcbv" podStartSLOduration=124.21853578 podStartE2EDuration="2m4.21853578s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:58.146408855 +0000 UTC m=+144.712103590" watchObservedRunningTime="2026-01-07 03:34:58.21853578 +0000 UTC m=+144.784230515" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.242144 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pc8qr" Jan 07 
03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.269704 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.270240 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.770222512 +0000 UTC m=+145.335917247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.370800 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.371252 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-07 03:34:58.871229861 +0000 UTC m=+145.436924586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.472511 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.472960 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:58.97293941 +0000 UTC m=+145.538634145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.574136 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.574333 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.07429453 +0000 UTC m=+145.639989265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.574451 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.574878 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.074870208 +0000 UTC m=+145.640564943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.675591 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.675852 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.175808895 +0000 UTC m=+145.741503630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.675943 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.676429 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.176412624 +0000 UTC m=+145.742107359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.777403 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.777707 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.277678621 +0000 UTC m=+145.843373356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.777910 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.778272 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.278264268 +0000 UTC m=+145.843959003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.878977 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.879194 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.379168163 +0000 UTC m=+145.944862898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.879571 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.879899 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.379892416 +0000 UTC m=+145.945587151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.952029 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7vwd" event={"ID":"e6badf96-ed64-4177-9bb3-9cc00ac5ce09","Type":"ContainerStarted","Data":"bed0539a1b67fbec1fd57e82e9353083458543b6eca51c51e095565dd0410d1a"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.953945 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79w2t" event={"ID":"aba93425-6143-4f96-aff6-178f1f1fb3ac","Type":"ContainerStarted","Data":"3a09fa6cf84aabc0655bc99f6a39ded3c90ae9c00144ed1f8fc5e85da6b5fe85"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.956033 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" event={"ID":"c52d9fdd-cc98-4b50-a85f-6d206363fdb3","Type":"ContainerStarted","Data":"ca7d46808a9c5319285a714c8823a9192b153b59756d392f22483371047d2b7c"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.957622 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" event={"ID":"307b87f4-a717-4813-9ef9-4d44ca9e33f5","Type":"ContainerStarted","Data":"82a4a21fe9948d748d703bf4805ae28efdcd50cf0a41ba3ba4779bbd3362c6ac"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.959699 4980 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-dns/dns-default-xqhnv" event={"ID":"1b31d370-2ed6-406a-b92f-a6c74386d4c1","Type":"ContainerStarted","Data":"8463cf926d58d7dfc50f668bb55d1e206cdde1be843329a40ed3c9e4723234f8"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.959724 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xqhnv" event={"ID":"1b31d370-2ed6-406a-b92f-a6c74386d4c1","Type":"ContainerStarted","Data":"35318ea83fbc4408bec6c780458182f3de337e9ef32ba6b83406357c70dc76f0"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.959849 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xqhnv" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.962116 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" event={"ID":"74d1f814-c65c-4ef3-91a2-911e9f23d634","Type":"ContainerStarted","Data":"b16533d870ad00b9403b3ef8248f859cc1852c6f6173562e808d0cf1c68e4724"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.964321 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" event={"ID":"7ddb847a-2850-4c75-ad28-5cf2cff5ccf8","Type":"ContainerStarted","Data":"2a4fdb95354b972cf4672b4bb0bbbcc58321c742a3b1eda77f75d8c9233c7e35"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.967502 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" event={"ID":"2d9aa4e6-3178-46b8-bb65-ca339c36cef3","Type":"ContainerStarted","Data":"8d90830e1dd1550ce61b9b2e78fb9ebe1b1641baf9463c98e70a5c5b07b4f35a"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.967564 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" 
event={"ID":"2d9aa4e6-3178-46b8-bb65-ca339c36cef3","Type":"ContainerStarted","Data":"04b8fcd842be73606026f19461bcfe5415e8b25a06c7061c28d05636799f7539"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.969567 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" event={"ID":"e765fa33-9019-4903-9dc2-5ec87e89c0fe","Type":"ContainerStarted","Data":"07ca4dd9a79978ce59fc2ccefe84f1db6a9d2f1492442e828ea53d3f3b11c059"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.970796 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-znb5v" event={"ID":"4ad664a2-98ab-4ceb-9405-377d259db0f3","Type":"ContainerStarted","Data":"d59ffa6f2e2dc3d1606f56c1c463d54059d86f6ac5cac24af8f9a7da0ff550ec"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.972699 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" event={"ID":"39cd077a-5ec6-43f5-b541-f50be415eca7","Type":"ContainerStarted","Data":"f39d59d7b8db7f7c1e1b0e043563e6c523734520d94250f20460eacdb116147c"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.972818 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.974224 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6wjp7" event={"ID":"727de252-9019-46b7-8b54-25d3c80d5437","Type":"ContainerStarted","Data":"81a241ac86c002c2a2d94a8fa78039b51500b8984d7e0c2c5147da9986a18736"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.975944 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" 
event={"ID":"cce4a570-94cd-4548-9682-7d69c980686a","Type":"ContainerStarted","Data":"000fb0e50c21198878fad3a64d8cb5c4b43e191428b36c8a161ff73335e50d59"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.977396 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" event={"ID":"e329501d-6d85-4f84-b9a2-3f29a0f41881","Type":"ContainerStarted","Data":"3b1c35266ea482506d6808b24644af07caee0076eb74a19ea6ed01371df458e3"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.980735 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.980945 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.480907335 +0000 UTC m=+146.046602070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.981507 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.981770 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" event={"ID":"e80a9760-331b-4b67-b194-b397e7a692c0","Type":"ContainerStarted","Data":"6b4173a66946361fd35026cb8b511bfb3807a0a07836e1c977541dde5a42f43e"} Jan 07 03:34:58 crc kubenswrapper[4980]: E0107 03:34:58.981949 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.481933797 +0000 UTC m=+146.047628532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.981953 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" event={"ID":"e80a9760-331b-4b67-b194-b397e7a692c0","Type":"ContainerStarted","Data":"a88d96051324a3fa9541d93c468492ab951146329dbd4f39340b3367b8294afd"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.984988 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" event={"ID":"7a9327ea-0e49-46d3-a849-bef3feed4a78","Type":"ContainerStarted","Data":"350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.985191 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.988603 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" event={"ID":"87de8454-433f-4bf0-adb3-315352ae6312","Type":"ContainerStarted","Data":"8c2c09fb503feb73b8d0327329c819aa6bf2c85c5cd89bde96585a4ca9846170"} Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.989409 4980 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hc49h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: 
connect: connection refused" start-of-body= Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.989453 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" podUID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.989675 4980 patch_prober.go:28] interesting pod/downloads-7954f5f757-hn52l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.989724 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hn52l" podUID="aa545c14-ac24-490a-a488-9d26b26e6ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.989997 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:34:58 crc kubenswrapper[4980]: I0107 03:34:58.998004 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4rzfj" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.010349 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hcrkr" podStartSLOduration=125.010331725 podStartE2EDuration="2m5.010331725s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.008328133 +0000 UTC m=+145.574022868" watchObservedRunningTime="2026-01-07 03:34:59.010331725 +0000 UTC m=+145.576026460" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.083038 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.083307 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.583265865 +0000 UTC m=+146.148960590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.083676 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.086758 4980 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" podStartSLOduration=125.086744662 podStartE2EDuration="2m5.086744662s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.076844869 +0000 UTC m=+145.642539604" watchObservedRunningTime="2026-01-07 03:34:59.086744662 +0000 UTC m=+145.652439397" Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.092267 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.59224209 +0000 UTC m=+146.157936825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.124354 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bqgbx" podStartSLOduration=125.124326301 podStartE2EDuration="2m5.124326301s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.122943158 +0000 UTC m=+145.688637893" watchObservedRunningTime="2026-01-07 03:34:59.124326301 +0000 UTC m=+145.690021036" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.143376 4980 
patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:34:59 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:34:59 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:34:59 crc kubenswrapper[4980]: healthz check failed Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.143424 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.194326 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.195050 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.695015513 +0000 UTC m=+146.260710248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.274371 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xqhnv" podStartSLOduration=9.274355069 podStartE2EDuration="9.274355069s" podCreationTimestamp="2026-01-07 03:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.180994435 +0000 UTC m=+145.746689170" watchObservedRunningTime="2026-01-07 03:34:59.274355069 +0000 UTC m=+145.840049804" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.275340 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" podStartSLOduration=125.275333199 podStartE2EDuration="2m5.275333199s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.262193168 +0000 UTC m=+145.827887903" watchObservedRunningTime="2026-01-07 03:34:59.275333199 +0000 UTC m=+145.841027934" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.306482 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.307044 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.807024629 +0000 UTC m=+146.372719364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.313985 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-99nxn" podStartSLOduration=126.31396195 podStartE2EDuration="2m6.31396195s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.307701899 +0000 UTC m=+145.873396634" watchObservedRunningTime="2026-01-07 03:34:59.31396195 +0000 UTC m=+145.879656685" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.338896 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q46ws" podStartSLOduration=125.338868392 podStartE2EDuration="2m5.338868392s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.336276823 +0000 UTC 
m=+145.901971578" watchObservedRunningTime="2026-01-07 03:34:59.338868392 +0000 UTC m=+145.904563127" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.409738 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.410250 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:34:59.910223944 +0000 UTC m=+146.475918679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.423481 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" podStartSLOduration=126.423460849 podStartE2EDuration="2m6.423460849s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.422920712 +0000 UTC m=+145.988615447" watchObservedRunningTime="2026-01-07 03:34:59.423460849 +0000 UTC m=+145.989155584" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 
03:34:59.425471 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-d52pc" podStartSLOduration=125.42546509 podStartE2EDuration="2m5.42546509s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.398157245 +0000 UTC m=+145.963851990" watchObservedRunningTime="2026-01-07 03:34:59.42546509 +0000 UTC m=+145.991159825" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.444272 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" podStartSLOduration=125.444254555 podStartE2EDuration="2m5.444254555s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:34:59.443351237 +0000 UTC m=+146.009045972" watchObservedRunningTime="2026-01-07 03:34:59.444254555 +0000 UTC m=+146.009949290" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.512329 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.512851 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.012835912 +0000 UTC m=+146.578530637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.613205 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.613319 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.113304745 +0000 UTC m=+146.678999480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.614014 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.614323 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.114316136 +0000 UTC m=+146.680010871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.668523 4980 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.714804 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.715175 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.21515959 +0000 UTC m=+146.780854325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.816621 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.816930 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.316918921 +0000 UTC m=+146.882613656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.853024 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2bfnr"] Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.853932 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.859073 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.876353 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bfnr"] Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.918526 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.918731 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.418690644 +0000 UTC m=+146.984385369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.919309 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-utilities\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.919373 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.919453 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnp6h\" (UniqueName: \"kubernetes.io/projected/7749febc-7b8f-4a6b-96e8-2579f281cede-kube-api-access-vnp6h\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.919527 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-catalog-content\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:34:59 crc kubenswrapper[4980]: E0107 03:34:59.919682 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.419663424 +0000 UTC m=+146.985358159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.929917 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.995752 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" event={"ID":"cce4a570-94cd-4548-9682-7d69c980686a","Type":"ContainerStarted","Data":"8252e008913a692444b1d486e65e6fe1c9c58874cbca0aa783b71c9523a4a1df"} Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.995815 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" event={"ID":"cce4a570-94cd-4548-9682-7d69c980686a","Type":"ContainerStarted","Data":"eeb390eac52aea383a781dc33be2feaddd62d4c55424a20c398464065a1e7ed3"} Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.997737 4980 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-hn52l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 07 03:34:59 crc kubenswrapper[4980]: I0107 03:34:59.997820 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hn52l" podUID="aa545c14-ac24-490a-a488-9d26b26e6ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.004157 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g6pfb" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.005014 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.020381 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.020607 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-catalog-content\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.020641 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-utilities\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.020722 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnp6h\" (UniqueName: \"kubernetes.io/projected/7749febc-7b8f-4a6b-96e8-2579f281cede-kube-api-access-vnp6h\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:35:00 crc kubenswrapper[4980]: E0107 03:35:00.020884 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.520856969 +0000 UTC m=+147.086551704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.021262 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-utilities\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.021353 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-catalog-content\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.037011 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-slxp5"] Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.037911 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.042523 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.053325 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-slxp5"] Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.057300 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnp6h\" (UniqueName: \"kubernetes.io/projected/7749febc-7b8f-4a6b-96e8-2579f281cede-kube-api-access-vnp6h\") pod \"certified-operators-2bfnr\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.122755 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.123009 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-catalog-content\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.123267 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-utilities\") pod 
\"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.123395 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rkf\" (UniqueName: \"kubernetes.io/projected/83605c82-2947-4e84-8657-e9d040571dae-kube-api-access-p7rkf\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: E0107 03:35:00.130932 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.630908793 +0000 UTC m=+147.196603528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.143489 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:00 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:00 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:00 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.143601 4980 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.178153 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.232245 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:35:00 crc kubenswrapper[4980]: E0107 03:35:00.232505 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.732450689 +0000 UTC m=+147.298145424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.232599 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-catalog-content\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.232750 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-utilities\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.232805 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rkf\" (UniqueName: \"kubernetes.io/projected/83605c82-2947-4e84-8657-e9d040571dae-kube-api-access-p7rkf\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.232985 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: 
\"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.233343 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-utilities\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: E0107 03:35:00.233521 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.733513452 +0000 UTC m=+147.299208187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.233705 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-catalog-content\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.239642 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-42gz5"] Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.245340 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.256764 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42gz5"] Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.266186 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rkf\" (UniqueName: \"kubernetes.io/projected/83605c82-2947-4e84-8657-e9d040571dae-kube-api-access-p7rkf\") pod \"community-operators-slxp5\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.340778 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.340988 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-catalog-content\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.341078 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c72lr\" (UniqueName: \"kubernetes.io/projected/52350cdf-a327-410e-89cf-6666175b6ddc-kube-api-access-c72lr\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.341133 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-utilities\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: E0107 03:35:00.341232 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.841217875 +0000 UTC m=+147.406912610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.351489 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.428522 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dphcj"] Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.429527 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.442486 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c72lr\" (UniqueName: \"kubernetes.io/projected/52350cdf-a327-410e-89cf-6666175b6ddc-kube-api-access-c72lr\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.442541 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.442589 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dphcj"] Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.442644 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-utilities\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.442725 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-catalog-content\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: E0107 03:35:00.443064 4980 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-07 03:35:00.94304981 +0000 UTC m=+147.508744545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nql9v" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.443415 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-utilities\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.443717 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-catalog-content\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.464475 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c72lr\" (UniqueName: \"kubernetes.io/projected/52350cdf-a327-410e-89cf-6666175b6ddc-kube-api-access-c72lr\") pod \"certified-operators-42gz5\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.528752 4980 reconciler.go:161] 
"OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-07T03:34:59.668548864Z","Handler":null,"Name":""} Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.534744 4980 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.534787 4980 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.545147 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.545709 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-catalog-content\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.545750 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-utilities\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.545801 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmpcf\" (UniqueName: \"kubernetes.io/projected/9cad7956-ecee-468f-9fbe-b8a99a646cfb-kube-api-access-vmpcf\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.561676 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.564540 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bfnr"] Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.567388 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.647158 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-catalog-content\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.647196 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-utilities\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.647217 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.647244 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.647274 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmpcf\" (UniqueName: 
\"kubernetes.io/projected/9cad7956-ecee-468f-9fbe-b8a99a646cfb-kube-api-access-vmpcf\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.647292 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.647310 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.648174 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-catalog-content\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.648415 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-utilities\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.650651 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.652282 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.653369 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.654938 4980 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.654967 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.659611 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.672429 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmpcf\" (UniqueName: \"kubernetes.io/projected/9cad7956-ecee-468f-9fbe-b8a99a646cfb-kube-api-access-vmpcf\") pod \"community-operators-dphcj\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.701147 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.702223 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-slxp5"] Jan 07 03:35:00 crc kubenswrapper[4980]: W0107 03:35:00.721464 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83605c82_2947_4e84_8657_e9d040571dae.slice/crio-7a063e83305c6dceeffaa05c82a5b8a487b7df622f9b5b73f3d32d6593dabe7b WatchSource:0}: Error finding container 7a063e83305c6dceeffaa05c82a5b8a487b7df622f9b5b73f3d32d6593dabe7b: Status 404 returned error can't find the container with id 7a063e83305c6dceeffaa05c82a5b8a487b7df622f9b5b73f3d32d6593dabe7b Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.735908 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nql9v\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.748389 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.755088 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.810746 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.852454 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42gz5"] Jan 07 03:35:00 crc kubenswrapper[4980]: W0107 03:35:00.870837 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52350cdf_a327_410e_89cf_6666175b6ddc.slice/crio-cc420900443e6edc5357e944f00f135ab72783abe1763bd95383c1be6e43aadb WatchSource:0}: Error finding container cc420900443e6edc5357e944f00f135ab72783abe1763bd95383c1be6e43aadb: Status 404 returned error can't find the container with id cc420900443e6edc5357e944f00f135ab72783abe1763bd95383c1be6e43aadb Jan 07 03:35:00 crc kubenswrapper[4980]: I0107 03:35:00.989269 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.002894 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.012996 4980 generic.go:334] "Generic (PLEG): container finished" podID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerID="6018d287f164c0533c50b3a376b1308d0c74e9fdc3c8ccd52cd2fb8e29d72de5" exitCode=0 Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.013080 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bfnr" event={"ID":"7749febc-7b8f-4a6b-96e8-2579f281cede","Type":"ContainerDied","Data":"6018d287f164c0533c50b3a376b1308d0c74e9fdc3c8ccd52cd2fb8e29d72de5"} Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.013104 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bfnr" event={"ID":"7749febc-7b8f-4a6b-96e8-2579f281cede","Type":"ContainerStarted","Data":"f273c7007e67c918cb1d5380a9b70e75e513afd6d5951cf313ad478d4d627d32"} Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.021121 4980 generic.go:334] "Generic (PLEG): container finished" podID="7a5331ee-46c7-4826-85cf-3c57f25f1d6c" containerID="21b2b1e6fd1c40ca83a8857e70933c9a9dea5971e8af4944cd7932c451885613" exitCode=0 Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.021195 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" event={"ID":"7a5331ee-46c7-4826-85cf-3c57f25f1d6c","Type":"ContainerDied","Data":"21b2b1e6fd1c40ca83a8857e70933c9a9dea5971e8af4944cd7932c451885613"} Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.024311 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" event={"ID":"cce4a570-94cd-4548-9682-7d69c980686a","Type":"ContainerStarted","Data":"9e39b6692d770d6c5c5c70114af5bd581cc21c9e3376bff1f83ff1501a3784b5"} Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.033257 4980 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slxp5" event={"ID":"83605c82-2947-4e84-8657-e9d040571dae","Type":"ContainerStarted","Data":"7a063e83305c6dceeffaa05c82a5b8a487b7df622f9b5b73f3d32d6593dabe7b"} Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.044601 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42gz5" event={"ID":"52350cdf-a327-410e-89cf-6666175b6ddc","Type":"ContainerStarted","Data":"cc420900443e6edc5357e944f00f135ab72783abe1763bd95383c1be6e43aadb"} Jan 07 03:35:01 crc kubenswrapper[4980]: W0107 03:35:01.052511 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-bce5ba6dcac0b46476e7e27bed3217fc671bd430b2eaa19ca807cff206c120f2 WatchSource:0}: Error finding container bce5ba6dcac0b46476e7e27bed3217fc671bd430b2eaa19ca807cff206c120f2: Status 404 returned error can't find the container with id bce5ba6dcac0b46476e7e27bed3217fc671bd430b2eaa19ca807cff206c120f2 Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.055369 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.068910 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6nqsq" podStartSLOduration=12.068892248 podStartE2EDuration="12.068892248s" podCreationTimestamp="2026-01-07 03:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:01.064487814 +0000 UTC m=+147.630182549" watchObservedRunningTime="2026-01-07 03:35:01.068892248 +0000 UTC m=+147.634586983" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.084277 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-dphcj"] Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.142070 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:01 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:01 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:01 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.142132 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:01 crc kubenswrapper[4980]: W0107 03:35:01.315389 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-15dcbba8e4839c6d4ba11ccc13489d7da4141155b13eee03f34377e9e5d3171f WatchSource:0}: Error finding container 15dcbba8e4839c6d4ba11ccc13489d7da4141155b13eee03f34377e9e5d3171f: Status 404 returned error can't find the container with id 15dcbba8e4839c6d4ba11ccc13489d7da4141155b13eee03f34377e9e5d3171f Jan 07 03:35:01 crc kubenswrapper[4980]: W0107 03:35:01.318326 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-61862e8ed4b8f4a39c5bad192c44430b6fc6f9e2e2ba9133f4e081ac97980213 WatchSource:0}: Error finding container 61862e8ed4b8f4a39c5bad192c44430b6fc6f9e2e2ba9133f4e081ac97980213: Status 404 returned error can't find the container with id 61862e8ed4b8f4a39c5bad192c44430b6fc6f9e2e2ba9133f4e081ac97980213 Jan 07 03:35:01 crc 
kubenswrapper[4980]: I0107 03:35:01.604892 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nql9v"] Jan 07 03:35:01 crc kubenswrapper[4980]: W0107 03:35:01.624367 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4318c07_8e55_4555_bebb_297c5bb68e73.slice/crio-8c5059b2628b5c74f746db8aaa5228d57dc7a10fa88bad35c71178c3e7e00eef WatchSource:0}: Error finding container 8c5059b2628b5c74f746db8aaa5228d57dc7a10fa88bad35c71178c3e7e00eef: Status 404 returned error can't find the container with id 8c5059b2628b5c74f746db8aaa5228d57dc7a10fa88bad35c71178c3e7e00eef Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.740896 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.828526 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9c5kx"] Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.830285 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.832329 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.841187 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c5kx"] Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.865755 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-utilities\") pod \"redhat-marketplace-9c5kx\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.865820 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-catalog-content\") pod \"redhat-marketplace-9c5kx\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.865863 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfql8\" (UniqueName: \"kubernetes.io/projected/16999f3a-e9cd-449c-bb6f-72b7759cb32e-kube-api-access-rfql8\") pod \"redhat-marketplace-9c5kx\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.967274 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-utilities\") pod \"redhat-marketplace-9c5kx\" (UID: 
\"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.967359 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-catalog-content\") pod \"redhat-marketplace-9c5kx\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.967419 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfql8\" (UniqueName: \"kubernetes.io/projected/16999f3a-e9cd-449c-bb6f-72b7759cb32e-kube-api-access-rfql8\") pod \"redhat-marketplace-9c5kx\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.967947 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-utilities\") pod \"redhat-marketplace-9c5kx\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.968135 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-catalog-content\") pod \"redhat-marketplace-9c5kx\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:01 crc kubenswrapper[4980]: I0107 03:35:01.998458 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfql8\" (UniqueName: \"kubernetes.io/projected/16999f3a-e9cd-449c-bb6f-72b7759cb32e-kube-api-access-rfql8\") pod \"redhat-marketplace-9c5kx\" (UID: 
\"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.058669 4980 generic.go:334] "Generic (PLEG): container finished" podID="83605c82-2947-4e84-8657-e9d040571dae" containerID="5ad0d0723787d429fc22e1e570d9a2eb355d47cfbe7b69fe9babe3bbae2a05de" exitCode=0 Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.058773 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slxp5" event={"ID":"83605c82-2947-4e84-8657-e9d040571dae","Type":"ContainerDied","Data":"5ad0d0723787d429fc22e1e570d9a2eb355d47cfbe7b69fe9babe3bbae2a05de"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.065754 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1e89ccdb39846be984ae117d6beaf8784aae1f951b01d5b3bfc20a2921873169"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.065824 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"15dcbba8e4839c6d4ba11ccc13489d7da4141155b13eee03f34377e9e5d3171f"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.066431 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.066491 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.066504 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.070841 4980 generic.go:334] 
"Generic (PLEG): container finished" podID="52350cdf-a327-410e-89cf-6666175b6ddc" containerID="c4e6f2555b3b53d674785252d618d956b8bb184968df65f278f94154677cfc7d" exitCode=0 Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.070894 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42gz5" event={"ID":"52350cdf-a327-410e-89cf-6666175b6ddc","Type":"ContainerDied","Data":"c4e6f2555b3b53d674785252d618d956b8bb184968df65f278f94154677cfc7d"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.073600 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" event={"ID":"d4318c07-8e55-4555-bebb-297c5bb68e73","Type":"ContainerStarted","Data":"f2a7d9f5a36b5c3ff06194df911da162c3e195fbf79bac38f7530202ce1a8171"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.073625 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" event={"ID":"d4318c07-8e55-4555-bebb-297c5bb68e73","Type":"ContainerStarted","Data":"8c5059b2628b5c74f746db8aaa5228d57dc7a10fa88bad35c71178c3e7e00eef"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.073995 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.076759 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f731998cefc20ee5c8224604786167dbac466625ed4357415c63eb72872e83db"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.076809 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"61862e8ed4b8f4a39c5bad192c44430b6fc6f9e2e2ba9133f4e081ac97980213"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.079005 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6b96ff74174fbf8c0a895575c9bfa2b6ba8542a8e84676c5fc18c08a150e5541"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.079083 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bce5ba6dcac0b46476e7e27bed3217fc671bd430b2eaa19ca807cff206c120f2"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.082185 4980 generic.go:334] "Generic (PLEG): container finished" podID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerID="e0aa69d618effa9483d5a8cae443a9262c3febf197b52708a9e6fcbfb4fde0fe" exitCode=0 Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.083955 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dphcj" event={"ID":"9cad7956-ecee-468f-9fbe-b8a99a646cfb","Type":"ContainerDied","Data":"e0aa69d618effa9483d5a8cae443a9262c3febf197b52708a9e6fcbfb4fde0fe"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.083995 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dphcj" event={"ID":"9cad7956-ecee-468f-9fbe-b8a99a646cfb","Type":"ContainerStarted","Data":"5a242f59c47029492a789d1a0c54ef259a5f15de7bd120d6ba2b2e708481d716"} Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.085442 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.132376 
4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.133780 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.140380 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:02 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:02 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:02 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.140434 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.151303 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.154883 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.176766 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.176809 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.182715 4980 patch_prober.go:28] interesting pod/console-f9d7485db-46blp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.182794 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-46blp" podUID="063cfd7b-7d93-45bc-a374-99b5e204b200" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.251594 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xdtx8"] Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.252880 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.254544 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" podStartSLOduration=128.254521907 podStartE2EDuration="2m8.254521907s" podCreationTimestamp="2026-01-07 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:02.248241724 +0000 UTC m=+148.813936459" watchObservedRunningTime="2026-01-07 03:35:02.254521907 +0000 UTC m=+148.820216642" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.330649 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdtx8"] Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.379012 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-catalog-content\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.379078 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-utilities\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.379110 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szg8z\" (UniqueName: \"kubernetes.io/projected/fd9447f0-10ac-4740-a5a6-618b454b89be-kube-api-access-szg8z\") pod \"redhat-marketplace-xdtx8\" (UID: 
\"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.467293 4980 patch_prober.go:28] interesting pod/downloads-7954f5f757-hn52l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.467338 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hn52l" podUID="aa545c14-ac24-490a-a488-9d26b26e6ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.467951 4980 patch_prober.go:28] interesting pod/downloads-7954f5f757-hn52l container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.468031 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hn52l" podUID="aa545c14-ac24-490a-a488-9d26b26e6ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.481069 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-catalog-content\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.481114 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-utilities\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.481135 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szg8z\" (UniqueName: \"kubernetes.io/projected/fd9447f0-10ac-4740-a5a6-618b454b89be-kube-api-access-szg8z\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.481831 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-catalog-content\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.482158 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-utilities\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.493857 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.509040 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szg8z\" (UniqueName: \"kubernetes.io/projected/fd9447f0-10ac-4740-a5a6-618b454b89be-kube-api-access-szg8z\") pod \"redhat-marketplace-xdtx8\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.575740 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.683467 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume\") pod \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.683880 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fxxs\" (UniqueName: \"kubernetes.io/projected/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-kube-api-access-5fxxs\") pod \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.683901 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-secret-volume\") pod \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\" (UID: \"7a5331ee-46c7-4826-85cf-3c57f25f1d6c\") " Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.696757 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume" (OuterVolumeSpecName: 
"config-volume") pod "7a5331ee-46c7-4826-85cf-3c57f25f1d6c" (UID: "7a5331ee-46c7-4826-85cf-3c57f25f1d6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.697699 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c5kx"] Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.702697 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a5331ee-46c7-4826-85cf-3c57f25f1d6c" (UID: "7a5331ee-46c7-4826-85cf-3c57f25f1d6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.703109 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-kube-api-access-5fxxs" (OuterVolumeSpecName: "kube-api-access-5fxxs") pod "7a5331ee-46c7-4826-85cf-3c57f25f1d6c" (UID: "7a5331ee-46c7-4826-85cf-3c57f25f1d6c"). InnerVolumeSpecName "kube-api-access-5fxxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:02 crc kubenswrapper[4980]: W0107 03:35:02.709116 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16999f3a_e9cd_449c_bb6f_72b7759cb32e.slice/crio-37228e7429e05af6303cad94624f5c40a5a04f09e542e5de5130f8b61b3bdbd0 WatchSource:0}: Error finding container 37228e7429e05af6303cad94624f5c40a5a04f09e542e5de5130f8b61b3bdbd0: Status 404 returned error can't find the container with id 37228e7429e05af6303cad94624f5c40a5a04f09e542e5de5130f8b61b3bdbd0 Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.785867 4980 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.785896 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fxxs\" (UniqueName: \"kubernetes.io/projected/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-kube-api-access-5fxxs\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.785907 4980 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a5331ee-46c7-4826-85cf-3c57f25f1d6c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:02 crc kubenswrapper[4980]: I0107 03:35:02.854874 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdtx8"] Jan 07 03:35:02 crc kubenswrapper[4980]: W0107 03:35:02.875906 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd9447f0_10ac_4740_a5a6_618b454b89be.slice/crio-524db72c0ebbacfac31fd37fb16916d3fb240500791c38bb576dafdc822e4015 WatchSource:0}: Error finding container 524db72c0ebbacfac31fd37fb16916d3fb240500791c38bb576dafdc822e4015: Status 
404 returned error can't find the container with id 524db72c0ebbacfac31fd37fb16916d3fb240500791c38bb576dafdc822e4015 Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.037842 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2mq9"] Jan 07 03:35:03 crc kubenswrapper[4980]: E0107 03:35:03.038463 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5331ee-46c7-4826-85cf-3c57f25f1d6c" containerName="collect-profiles" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.038481 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5331ee-46c7-4826-85cf-3c57f25f1d6c" containerName="collect-profiles" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.038591 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5331ee-46c7-4826-85cf-3c57f25f1d6c" containerName="collect-profiles" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.039318 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.041885 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.062136 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2mq9"] Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.099874 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7bdh\" (UniqueName: \"kubernetes.io/projected/de1dad58-7ec1-4867-8494-044120bf894b-kube-api-access-n7bdh\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.099922 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-utilities\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.099982 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-catalog-content\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.105009 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdtx8" event={"ID":"fd9447f0-10ac-4740-a5a6-618b454b89be","Type":"ContainerStarted","Data":"524db72c0ebbacfac31fd37fb16916d3fb240500791c38bb576dafdc822e4015"} Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.112455 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.113083 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.128201 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.128522 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.129677 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" event={"ID":"7a5331ee-46c7-4826-85cf-3c57f25f1d6c","Type":"ContainerDied","Data":"fc755400a2cb9698242489bb7962e204ba9590138602ece38e267482d5016c94"} Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.129709 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc755400a2cb9698242489bb7962e204ba9590138602ece38e267482d5016c94" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.129770 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.134032 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.141544 4980 generic.go:334] "Generic (PLEG): container finished" podID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerID="4c9dbc2c1e36fe43cdaa11d8339988f0e98465ff7c64ed3a9d91c56ebf6e35b9" exitCode=0 Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.141708 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c5kx" event={"ID":"16999f3a-e9cd-449c-bb6f-72b7759cb32e","Type":"ContainerDied","Data":"4c9dbc2c1e36fe43cdaa11d8339988f0e98465ff7c64ed3a9d91c56ebf6e35b9"} Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.141781 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c5kx" event={"ID":"16999f3a-e9cd-449c-bb6f-72b7759cb32e","Type":"ContainerStarted","Data":"37228e7429e05af6303cad94624f5c40a5a04f09e542e5de5130f8b61b3bdbd0"} Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.145103 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:03 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:03 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:03 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.145150 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.165274 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-pt6jg" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.167032 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.168805 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cxgj5" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.201436 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-utilities\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.201540 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-catalog-content\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.201615 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7bdh\" (UniqueName: \"kubernetes.io/projected/de1dad58-7ec1-4867-8494-044120bf894b-kube-api-access-n7bdh\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.202473 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-utilities\") pod 
\"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.202732 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-catalog-content\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.227062 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7bdh\" (UniqueName: \"kubernetes.io/projected/de1dad58-7ec1-4867-8494-044120bf894b-kube-api-access-n7bdh\") pod \"redhat-operators-k2mq9\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.302826 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8085860-d6b8-4018-8497-2d1b7e33eb52-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.302916 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8085860-d6b8-4018-8497-2d1b7e33eb52-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.375832 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.404900 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8085860-d6b8-4018-8497-2d1b7e33eb52-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.404945 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8085860-d6b8-4018-8497-2d1b7e33eb52-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.405107 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8085860-d6b8-4018-8497-2d1b7e33eb52-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.426386 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8085860-d6b8-4018-8497-2d1b7e33eb52-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.453597 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56mp9"] Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.454951 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56mp9"] Jan 07 
03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.455039 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.528049 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.608041 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/46100c6d-7c73-47b9-a30e-6409aabaf9db-kube-api-access-ptbxm\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.608126 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-utilities\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.608158 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-catalog-content\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.710016 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/46100c6d-7c73-47b9-a30e-6409aabaf9db-kube-api-access-ptbxm\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " 
pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.710448 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-utilities\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.710489 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-catalog-content\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.711046 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-catalog-content\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.713693 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-utilities\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.751359 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/46100c6d-7c73-47b9-a30e-6409aabaf9db-kube-api-access-ptbxm\") pod \"redhat-operators-56mp9\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc 
kubenswrapper[4980]: I0107 03:35:03.777718 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:03 crc kubenswrapper[4980]: I0107 03:35:03.967962 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.070422 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2mq9"] Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.073681 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56mp9"] Jan 07 03:35:04 crc kubenswrapper[4980]: W0107 03:35:04.116526 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46100c6d_7c73_47b9_a30e_6409aabaf9db.slice/crio-99a3e6f9ac317eebaf3908b87693a531692f47c70648007d83ee72994fe0ef03 WatchSource:0}: Error finding container 99a3e6f9ac317eebaf3908b87693a531692f47c70648007d83ee72994fe0ef03: Status 404 returned error can't find the container with id 99a3e6f9ac317eebaf3908b87693a531692f47c70648007d83ee72994fe0ef03 Jan 07 03:35:04 crc kubenswrapper[4980]: W0107 03:35:04.118081 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde1dad58_7ec1_4867_8494_044120bf894b.slice/crio-d2dc1e1aa59886a8e8d41172167664dddf18a0865244ea746886936b1ecf3498 WatchSource:0}: Error finding container d2dc1e1aa59886a8e8d41172167664dddf18a0865244ea746886936b1ecf3498: Status 404 returned error can't find the container with id d2dc1e1aa59886a8e8d41172167664dddf18a0865244ea746886936b1ecf3498 Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.142146 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:04 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:04 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:04 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.142205 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.152335 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8085860-d6b8-4018-8497-2d1b7e33eb52","Type":"ContainerStarted","Data":"64ac324db45e24ce13a183daada8f0a84b1815e1fd249d93d6ff2a76c0e18ce6"} Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.153201 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mq9" event={"ID":"de1dad58-7ec1-4867-8494-044120bf894b","Type":"ContainerStarted","Data":"d2dc1e1aa59886a8e8d41172167664dddf18a0865244ea746886936b1ecf3498"} Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.160780 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mp9" event={"ID":"46100c6d-7c73-47b9-a30e-6409aabaf9db","Type":"ContainerStarted","Data":"99a3e6f9ac317eebaf3908b87693a531692f47c70648007d83ee72994fe0ef03"} Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.164003 4980 generic.go:334] "Generic (PLEG): container finished" podID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerID="86d13a3e4e66a8875c95ddda84d491355a05b7b39e8f6d149bb511faf7fa6d2d" exitCode=0 Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.164069 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xdtx8" event={"ID":"fd9447f0-10ac-4740-a5a6-618b454b89be","Type":"ContainerDied","Data":"86d13a3e4e66a8875c95ddda84d491355a05b7b39e8f6d149bb511faf7fa6d2d"} Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.694512 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.697464 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.704385 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.708227 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.709361 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.830388 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea443beb-5142-4aa2-aa29-dea478f60ac2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.830848 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea443beb-5142-4aa2-aa29-dea478f60ac2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.932421 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea443beb-5142-4aa2-aa29-dea478f60ac2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.932480 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea443beb-5142-4aa2-aa29-dea478f60ac2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.932665 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea443beb-5142-4aa2-aa29-dea478f60ac2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:04 crc kubenswrapper[4980]: I0107 03:35:04.951890 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea443beb-5142-4aa2-aa29-dea478f60ac2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.067412 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.135547 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:05 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:05 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:05 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.136048 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.188278 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8085860-d6b8-4018-8497-2d1b7e33eb52","Type":"ContainerStarted","Data":"374e08b01487bf9a407acd59a18fb3f0ecef6128b0a94a4ca9251216a495a7e8"} Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.194237 4980 generic.go:334] "Generic (PLEG): container finished" podID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerID="949f4a9573d52bd958dc0c073bde632161cb174152eee2fe8da757a608cab614" exitCode=0 Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.194305 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mp9" event={"ID":"46100c6d-7c73-47b9-a30e-6409aabaf9db","Type":"ContainerDied","Data":"949f4a9573d52bd958dc0c073bde632161cb174152eee2fe8da757a608cab614"} Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.202264 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.202250522 podStartE2EDuration="2.202250522s" podCreationTimestamp="2026-01-07 03:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:05.200717154 +0000 UTC m=+151.766411889" watchObservedRunningTime="2026-01-07 03:35:05.202250522 +0000 UTC m=+151.767945247" Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.207125 4980 generic.go:334] "Generic (PLEG): container finished" podID="de1dad58-7ec1-4867-8494-044120bf894b" containerID="b01f02352edafaf8b885715177b0c05292e3c86007c32ecc1f2a936e21585215" exitCode=0 Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.207239 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mq9" event={"ID":"de1dad58-7ec1-4867-8494-044120bf894b","Type":"ContainerDied","Data":"b01f02352edafaf8b885715177b0c05292e3c86007c32ecc1f2a936e21585215"} Jan 07 03:35:05 crc kubenswrapper[4980]: I0107 03:35:05.453594 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.140811 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:06 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:06 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:06 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.141191 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.236125 4980 generic.go:334] "Generic (PLEG): container finished" podID="f8085860-d6b8-4018-8497-2d1b7e33eb52" containerID="374e08b01487bf9a407acd59a18fb3f0ecef6128b0a94a4ca9251216a495a7e8" exitCode=0 Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.236300 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8085860-d6b8-4018-8497-2d1b7e33eb52","Type":"ContainerDied","Data":"374e08b01487bf9a407acd59a18fb3f0ecef6128b0a94a4ca9251216a495a7e8"} Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.249294 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ea443beb-5142-4aa2-aa29-dea478f60ac2","Type":"ContainerStarted","Data":"dd24719db5d92695dad9ee99b77f6c7b8630f912440b2f0b8d90a38575903aa7"} Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.249420 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ea443beb-5142-4aa2-aa29-dea478f60ac2","Type":"ContainerStarted","Data":"0a23e6756ca87dd6afcd9cce065b7428d1ab64d023ba76e2c955665e067e7871"} Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.273836 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.273819632 podStartE2EDuration="2.273819632s" podCreationTimestamp="2026-01-07 03:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:06.270703196 +0000 UTC m=+152.836397931" watchObservedRunningTime="2026-01-07 03:35:06.273819632 +0000 UTC m=+152.839514367" Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.543307 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:35:06 crc kubenswrapper[4980]: I0107 03:35:06.543390 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.133250 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:07 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:07 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:07 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.133584 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.616194 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.686274 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8085860-d6b8-4018-8497-2d1b7e33eb52-kube-api-access\") pod \"f8085860-d6b8-4018-8497-2d1b7e33eb52\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.686321 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8085860-d6b8-4018-8497-2d1b7e33eb52-kubelet-dir\") pod \"f8085860-d6b8-4018-8497-2d1b7e33eb52\" (UID: \"f8085860-d6b8-4018-8497-2d1b7e33eb52\") " Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.686752 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8085860-d6b8-4018-8497-2d1b7e33eb52-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f8085860-d6b8-4018-8497-2d1b7e33eb52" (UID: "f8085860-d6b8-4018-8497-2d1b7e33eb52"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.692650 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8085860-d6b8-4018-8497-2d1b7e33eb52-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f8085860-d6b8-4018-8497-2d1b7e33eb52" (UID: "f8085860-d6b8-4018-8497-2d1b7e33eb52"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.788774 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8085860-d6b8-4018-8497-2d1b7e33eb52-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:07 crc kubenswrapper[4980]: I0107 03:35:07.788850 4980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8085860-d6b8-4018-8497-2d1b7e33eb52-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:08 crc kubenswrapper[4980]: I0107 03:35:08.133197 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:08 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:08 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:08 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:08 crc kubenswrapper[4980]: I0107 03:35:08.133257 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:08 crc kubenswrapper[4980]: I0107 03:35:08.288894 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8085860-d6b8-4018-8497-2d1b7e33eb52","Type":"ContainerDied","Data":"64ac324db45e24ce13a183daada8f0a84b1815e1fd249d93d6ff2a76c0e18ce6"} Jan 07 03:35:08 crc kubenswrapper[4980]: I0107 03:35:08.288946 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ac324db45e24ce13a183daada8f0a84b1815e1fd249d93d6ff2a76c0e18ce6" Jan 07 03:35:08 crc 
kubenswrapper[4980]: I0107 03:35:08.289080 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 07 03:35:08 crc kubenswrapper[4980]: I0107 03:35:08.300677 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea443beb-5142-4aa2-aa29-dea478f60ac2" containerID="dd24719db5d92695dad9ee99b77f6c7b8630f912440b2f0b8d90a38575903aa7" exitCode=0 Jan 07 03:35:08 crc kubenswrapper[4980]: I0107 03:35:08.300721 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ea443beb-5142-4aa2-aa29-dea478f60ac2","Type":"ContainerDied","Data":"dd24719db5d92695dad9ee99b77f6c7b8630f912440b2f0b8d90a38575903aa7"} Jan 07 03:35:08 crc kubenswrapper[4980]: I0107 03:35:08.728784 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xqhnv" Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.133696 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:09 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:09 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:09 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.133758 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.654003 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.731059 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea443beb-5142-4aa2-aa29-dea478f60ac2-kubelet-dir\") pod \"ea443beb-5142-4aa2-aa29-dea478f60ac2\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.731220 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea443beb-5142-4aa2-aa29-dea478f60ac2-kube-api-access\") pod \"ea443beb-5142-4aa2-aa29-dea478f60ac2\" (UID: \"ea443beb-5142-4aa2-aa29-dea478f60ac2\") " Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.731253 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea443beb-5142-4aa2-aa29-dea478f60ac2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea443beb-5142-4aa2-aa29-dea478f60ac2" (UID: "ea443beb-5142-4aa2-aa29-dea478f60ac2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.731482 4980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea443beb-5142-4aa2-aa29-dea478f60ac2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.736225 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea443beb-5142-4aa2-aa29-dea478f60ac2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea443beb-5142-4aa2-aa29-dea478f60ac2" (UID: "ea443beb-5142-4aa2-aa29-dea478f60ac2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:09 crc kubenswrapper[4980]: I0107 03:35:09.833789 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea443beb-5142-4aa2-aa29-dea478f60ac2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:10 crc kubenswrapper[4980]: I0107 03:35:10.134061 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:10 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:10 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:10 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:10 crc kubenswrapper[4980]: I0107 03:35:10.134794 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:10 crc kubenswrapper[4980]: I0107 03:35:10.330751 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ea443beb-5142-4aa2-aa29-dea478f60ac2","Type":"ContainerDied","Data":"0a23e6756ca87dd6afcd9cce065b7428d1ab64d023ba76e2c955665e067e7871"} Jan 07 03:35:10 crc kubenswrapper[4980]: I0107 03:35:10.330798 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a23e6756ca87dd6afcd9cce065b7428d1ab64d023ba76e2c955665e067e7871" Jan 07 03:35:10 crc kubenswrapper[4980]: I0107 03:35:10.330859 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 07 03:35:11 crc kubenswrapper[4980]: I0107 03:35:11.132429 4980 patch_prober.go:28] interesting pod/router-default-5444994796-fn7tq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 03:35:11 crc kubenswrapper[4980]: [-]has-synced failed: reason withheld Jan 07 03:35:11 crc kubenswrapper[4980]: [+]process-running ok Jan 07 03:35:11 crc kubenswrapper[4980]: healthz check failed Jan 07 03:35:11 crc kubenswrapper[4980]: I0107 03:35:11.132479 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fn7tq" podUID="2a6e4d95-beda-46e5-8030-8f4f590cc22e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 03:35:12 crc kubenswrapper[4980]: I0107 03:35:12.138826 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:35:12 crc kubenswrapper[4980]: I0107 03:35:12.141270 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fn7tq" Jan 07 03:35:12 crc kubenswrapper[4980]: I0107 03:35:12.175327 4980 patch_prober.go:28] interesting pod/console-f9d7485db-46blp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 07 03:35:12 crc kubenswrapper[4980]: I0107 03:35:12.175378 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-46blp" podUID="063cfd7b-7d93-45bc-a374-99b5e204b200" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 07 03:35:12 
crc kubenswrapper[4980]: I0107 03:35:12.478545 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hn52l" Jan 07 03:35:12 crc kubenswrapper[4980]: I0107 03:35:12.731518 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:35:15 crc kubenswrapper[4980]: I0107 03:35:15.485262 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nrvkf"] Jan 07 03:35:15 crc kubenswrapper[4980]: I0107 03:35:15.485454 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" podUID="5948f721-3c3a-4f73-90f1-cb7a5d101df1" containerName="controller-manager" containerID="cri-o://6aeb9e5567aa8840b2e75157e8821da3a15d67eaceea4f0f06cbb72da8a4ccfb" gracePeriod=30 Jan 07 03:35:15 crc kubenswrapper[4980]: I0107 03:35:15.508707 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg"] Jan 07 03:35:15 crc kubenswrapper[4980]: I0107 03:35:15.509133 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" podUID="d08997fa-78f4-4c3c-a8f4-86ba967a4f35" containerName="route-controller-manager" containerID="cri-o://8a2cb4e6435170533bef663945ad6308eeef99f1a04460a38e01bce54a6365f5" gracePeriod=30 Jan 07 03:35:15 crc kubenswrapper[4980]: E0107 03:35:15.637868 4980 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5948f721_3c3a_4f73_90f1_cb7a5d101df1.slice/crio-6aeb9e5567aa8840b2e75157e8821da3a15d67eaceea4f0f06cbb72da8a4ccfb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd08997fa_78f4_4c3c_a8f4_86ba967a4f35.slice/crio-8a2cb4e6435170533bef663945ad6308eeef99f1a04460a38e01bce54a6365f5.scope\": RecentStats: unable to find data in memory cache]" Jan 07 03:35:16 crc kubenswrapper[4980]: I0107 03:35:16.139247 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:35:16 crc kubenswrapper[4980]: I0107 03:35:16.154383 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e3c7945-f3cb-4af2-8a0f-19b014123f74-metrics-certs\") pod \"network-metrics-daemon-j75z7\" (UID: \"1e3c7945-f3cb-4af2-8a0f-19b014123f74\") " pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:35:16 crc kubenswrapper[4980]: I0107 03:35:16.274099 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j75z7" Jan 07 03:35:16 crc kubenswrapper[4980]: I0107 03:35:16.388797 4980 generic.go:334] "Generic (PLEG): container finished" podID="5948f721-3c3a-4f73-90f1-cb7a5d101df1" containerID="6aeb9e5567aa8840b2e75157e8821da3a15d67eaceea4f0f06cbb72da8a4ccfb" exitCode=0 Jan 07 03:35:16 crc kubenswrapper[4980]: I0107 03:35:16.388917 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" event={"ID":"5948f721-3c3a-4f73-90f1-cb7a5d101df1","Type":"ContainerDied","Data":"6aeb9e5567aa8840b2e75157e8821da3a15d67eaceea4f0f06cbb72da8a4ccfb"} Jan 07 03:35:16 crc kubenswrapper[4980]: I0107 03:35:16.391760 4980 generic.go:334] "Generic (PLEG): container finished" podID="d08997fa-78f4-4c3c-a8f4-86ba967a4f35" containerID="8a2cb4e6435170533bef663945ad6308eeef99f1a04460a38e01bce54a6365f5" exitCode=0 Jan 07 03:35:16 crc kubenswrapper[4980]: I0107 03:35:16.391835 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" event={"ID":"d08997fa-78f4-4c3c-a8f4-86ba967a4f35","Type":"ContainerDied","Data":"8a2cb4e6435170533bef663945ad6308eeef99f1a04460a38e01bce54a6365f5"} Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.894955 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.944924 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bc995945c-mxmhp"] Jan 07 03:35:19 crc kubenswrapper[4980]: E0107 03:35:19.945360 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5948f721-3c3a-4f73-90f1-cb7a5d101df1" containerName="controller-manager" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.945395 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5948f721-3c3a-4f73-90f1-cb7a5d101df1" containerName="controller-manager" Jan 07 03:35:19 crc kubenswrapper[4980]: E0107 03:35:19.945430 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8085860-d6b8-4018-8497-2d1b7e33eb52" containerName="pruner" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.945444 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8085860-d6b8-4018-8497-2d1b7e33eb52" containerName="pruner" Jan 07 03:35:19 crc kubenswrapper[4980]: E0107 03:35:19.945465 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea443beb-5142-4aa2-aa29-dea478f60ac2" containerName="pruner" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.945479 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea443beb-5142-4aa2-aa29-dea478f60ac2" containerName="pruner" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.946161 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea443beb-5142-4aa2-aa29-dea478f60ac2" containerName="pruner" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.946238 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5948f721-3c3a-4f73-90f1-cb7a5d101df1" containerName="controller-manager" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.946323 4980 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f8085860-d6b8-4018-8497-2d1b7e33eb52" containerName="pruner" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.947358 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:19 crc kubenswrapper[4980]: I0107 03:35:19.958445 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bc995945c-mxmhp"] Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.016486 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5948f721-3c3a-4f73-90f1-cb7a5d101df1-serving-cert\") pod \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.017630 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-client-ca\") pod \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.017965 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm592\" (UniqueName: \"kubernetes.io/projected/5948f721-3c3a-4f73-90f1-cb7a5d101df1-kube-api-access-mm592\") pod \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.018103 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-proxy-ca-bundles\") pod \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.018273 4980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-config\") pod \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\" (UID: \"5948f721-3c3a-4f73-90f1-cb7a5d101df1\") " Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.018533 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-client-ca\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.018685 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-proxy-ca-bundles\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.018781 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-config\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.018959 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hps22\" (UniqueName: \"kubernetes.io/projected/3a5b9101-1923-4ff7-93ad-a50ac53f9852-kube-api-access-hps22\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 
03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.019055 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b9101-1923-4ff7-93ad-a50ac53f9852-serving-cert\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.019955 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5948f721-3c3a-4f73-90f1-cb7a5d101df1" (UID: "5948f721-3c3a-4f73-90f1-cb7a5d101df1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.020170 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-config" (OuterVolumeSpecName: "config") pod "5948f721-3c3a-4f73-90f1-cb7a5d101df1" (UID: "5948f721-3c3a-4f73-90f1-cb7a5d101df1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.020823 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-client-ca" (OuterVolumeSpecName: "client-ca") pod "5948f721-3c3a-4f73-90f1-cb7a5d101df1" (UID: "5948f721-3c3a-4f73-90f1-cb7a5d101df1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.024931 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5948f721-3c3a-4f73-90f1-cb7a5d101df1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5948f721-3c3a-4f73-90f1-cb7a5d101df1" (UID: "5948f721-3c3a-4f73-90f1-cb7a5d101df1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.027726 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5948f721-3c3a-4f73-90f1-cb7a5d101df1-kube-api-access-mm592" (OuterVolumeSpecName: "kube-api-access-mm592") pod "5948f721-3c3a-4f73-90f1-cb7a5d101df1" (UID: "5948f721-3c3a-4f73-90f1-cb7a5d101df1"). InnerVolumeSpecName "kube-api-access-mm592". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.120903 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hps22\" (UniqueName: \"kubernetes.io/projected/3a5b9101-1923-4ff7-93ad-a50ac53f9852-kube-api-access-hps22\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121026 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b9101-1923-4ff7-93ad-a50ac53f9852-serving-cert\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121086 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-client-ca\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121135 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-proxy-ca-bundles\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121172 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-config\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121298 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121321 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5948f721-3c3a-4f73-90f1-cb7a5d101df1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121341 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121359 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm592\" (UniqueName: 
\"kubernetes.io/projected/5948f721-3c3a-4f73-90f1-cb7a5d101df1-kube-api-access-mm592\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.121379 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5948f721-3c3a-4f73-90f1-cb7a5d101df1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.123237 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-proxy-ca-bundles\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.124366 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-config\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.127096 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b9101-1923-4ff7-93ad-a50ac53f9852-serving-cert\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.143987 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hps22\" (UniqueName: \"kubernetes.io/projected/3a5b9101-1923-4ff7-93ad-a50ac53f9852-kube-api-access-hps22\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " 
pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.173379 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-client-ca\") pod \"controller-manager-bc995945c-mxmhp\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.280485 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.416737 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" event={"ID":"5948f721-3c3a-4f73-90f1-cb7a5d101df1","Type":"ContainerDied","Data":"4d2c6315f2838c988b7cb9414318f7ace1b5703fafa58bd55ac47a0f3cd66856"} Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.416791 4980 scope.go:117] "RemoveContainer" containerID="6aeb9e5567aa8840b2e75157e8821da3a15d67eaceea4f0f06cbb72da8a4ccfb" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.416866 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nrvkf" Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.467733 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nrvkf"] Jan 07 03:35:20 crc kubenswrapper[4980]: I0107 03:35:20.476099 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nrvkf"] Jan 07 03:35:21 crc kubenswrapper[4980]: I0107 03:35:21.012014 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:35:21 crc kubenswrapper[4980]: I0107 03:35:21.742793 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5948f721-3c3a-4f73-90f1-cb7a5d101df1" path="/var/lib/kubelet/pods/5948f721-3c3a-4f73-90f1-cb7a5d101df1/volumes" Jan 07 03:35:22 crc kubenswrapper[4980]: I0107 03:35:22.179526 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:35:22 crc kubenswrapper[4980]: I0107 03:35:22.186781 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:35:23 crc kubenswrapper[4980]: I0107 03:35:23.197571 4980 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7rdqg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 07 03:35:23 crc kubenswrapper[4980]: I0107 03:35:23.197635 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" podUID="d08997fa-78f4-4c3c-a8f4-86ba967a4f35" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 07 03:35:25 crc kubenswrapper[4980]: I0107 03:35:25.997141 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.032418 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-config\") pod \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.032503 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7nf4\" (UniqueName: \"kubernetes.io/projected/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-kube-api-access-d7nf4\") pod \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.032570 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-client-ca\") pod \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.032587 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-serving-cert\") pod \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\" (UID: \"d08997fa-78f4-4c3c-a8f4-86ba967a4f35\") " Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.033474 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-config" 
(OuterVolumeSpecName: "config") pod "d08997fa-78f4-4c3c-a8f4-86ba967a4f35" (UID: "d08997fa-78f4-4c3c-a8f4-86ba967a4f35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.034011 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-client-ca" (OuterVolumeSpecName: "client-ca") pod "d08997fa-78f4-4c3c-a8f4-86ba967a4f35" (UID: "d08997fa-78f4-4c3c-a8f4-86ba967a4f35"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.036744 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7"] Jan 07 03:35:26 crc kubenswrapper[4980]: E0107 03:35:26.037243 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08997fa-78f4-4c3c-a8f4-86ba967a4f35" containerName="route-controller-manager" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.037271 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08997fa-78f4-4c3c-a8f4-86ba967a4f35" containerName="route-controller-manager" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.037432 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08997fa-78f4-4c3c-a8f4-86ba967a4f35" containerName="route-controller-manager" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.038162 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7"] Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.038229 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.053916 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-kube-api-access-d7nf4" (OuterVolumeSpecName: "kube-api-access-d7nf4") pod "d08997fa-78f4-4c3c-a8f4-86ba967a4f35" (UID: "d08997fa-78f4-4c3c-a8f4-86ba967a4f35"). InnerVolumeSpecName "kube-api-access-d7nf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.056203 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d08997fa-78f4-4c3c-a8f4-86ba967a4f35" (UID: "d08997fa-78f4-4c3c-a8f4-86ba967a4f35"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134137 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv5sc\" (UniqueName: \"kubernetes.io/projected/c24e59ff-fd92-421a-807f-adbacd7287ae-kube-api-access-sv5sc\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134526 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-client-ca\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134570 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-config\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134602 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24e59ff-fd92-421a-807f-adbacd7287ae-serving-cert\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134672 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134835 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7nf4\" (UniqueName: \"kubernetes.io/projected/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-kube-api-access-d7nf4\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134900 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.134917 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08997fa-78f4-4c3c-a8f4-86ba967a4f35-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.236431 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-config\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.236527 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24e59ff-fd92-421a-807f-adbacd7287ae-serving-cert\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.236622 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv5sc\" (UniqueName: \"kubernetes.io/projected/c24e59ff-fd92-421a-807f-adbacd7287ae-kube-api-access-sv5sc\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.236696 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-client-ca\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.238129 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-client-ca\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " 
pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.238849 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-config\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.241941 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24e59ff-fd92-421a-807f-adbacd7287ae-serving-cert\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.259895 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv5sc\" (UniqueName: \"kubernetes.io/projected/c24e59ff-fd92-421a-807f-adbacd7287ae-kube-api-access-sv5sc\") pod \"route-controller-manager-6ffb4b647f-cnnt7\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.375321 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.455700 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" event={"ID":"d08997fa-78f4-4c3c-a8f4-86ba967a4f35","Type":"ContainerDied","Data":"cc9baa2683283ad4a43b836d3be83b46b7f9f6a51ddd84b0eab896f7b5929379"} Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.455766 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg" Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.491458 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg"] Jan 07 03:35:26 crc kubenswrapper[4980]: I0107 03:35:26.496953 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7rdqg"] Jan 07 03:35:27 crc kubenswrapper[4980]: I0107 03:35:27.750086 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08997fa-78f4-4c3c-a8f4-86ba967a4f35" path="/var/lib/kubelet/pods/d08997fa-78f4-4c3c-a8f4-86ba967a4f35/volumes" Jan 07 03:35:33 crc kubenswrapper[4980]: I0107 03:35:33.582838 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gx94v" Jan 07 03:35:35 crc kubenswrapper[4980]: I0107 03:35:35.444079 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bc995945c-mxmhp"] Jan 07 03:35:35 crc kubenswrapper[4980]: I0107 03:35:35.536700 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7"] Jan 07 03:35:35 crc kubenswrapper[4980]: I0107 
03:35:35.829236 4980 scope.go:117] "RemoveContainer" containerID="8a2cb4e6435170533bef663945ad6308eeef99f1a04460a38e01bce54a6365f5" Jan 07 03:35:35 crc kubenswrapper[4980]: E0107 03:35:35.829998 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 07 03:35:35 crc kubenswrapper[4980]: E0107 03:35:35.830266 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c72lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-42gz5_openshift-marketplace(52350cdf-a327-410e-89cf-6666175b6ddc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 07 03:35:35 crc kubenswrapper[4980]: E0107 03:35:35.831389 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-42gz5" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" Jan 07 03:35:35 crc kubenswrapper[4980]: E0107 03:35:35.952911 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 07 03:35:35 crc kubenswrapper[4980]: E0107 03:35:35.953335 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnp6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2bfnr_openshift-marketplace(7749febc-7b8f-4a6b-96e8-2579f281cede): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 07 03:35:35 crc kubenswrapper[4980]: E0107 03:35:35.955125 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2bfnr" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" Jan 07 03:35:36 crc 
kubenswrapper[4980]: E0107 03:35:36.041761 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 07 03:35:36 crc kubenswrapper[4980]: E0107 03:35:36.042215 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfql8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-9c5kx_openshift-marketplace(16999f3a-e9cd-449c-bb6f-72b7759cb32e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 07 03:35:36 crc kubenswrapper[4980]: E0107 03:35:36.043517 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9c5kx" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.135000 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bc995945c-mxmhp"] Jan 07 03:35:36 crc kubenswrapper[4980]: W0107 03:35:36.179863 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5b9101_1923_4ff7_93ad_a50ac53f9852.slice/crio-9b2eed40c2479895e7abc4b03960e1a1f5c1ea75e6eaa748e6d8a904fe68de10 WatchSource:0}: Error finding container 9b2eed40c2479895e7abc4b03960e1a1f5c1ea75e6eaa748e6d8a904fe68de10: Status 404 returned error can't find the container with id 9b2eed40c2479895e7abc4b03960e1a1f5c1ea75e6eaa748e6d8a904fe68de10 Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.201703 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7"] Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.271250 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j75z7"] Jan 07 03:35:36 crc kubenswrapper[4980]: W0107 03:35:36.317520 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3c7945_f3cb_4af2_8a0f_19b014123f74.slice/crio-70dda320e76bf05b9b39d0a1757a3b486e23a414b90b57169bb2f71205c4ddaf WatchSource:0}: Error finding container 70dda320e76bf05b9b39d0a1757a3b486e23a414b90b57169bb2f71205c4ddaf: Status 404 returned error can't find the container with id 70dda320e76bf05b9b39d0a1757a3b486e23a414b90b57169bb2f71205c4ddaf Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.524480 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mp9" event={"ID":"46100c6d-7c73-47b9-a30e-6409aabaf9db","Type":"ContainerStarted","Data":"b0f3eb6126d4d0dfa937a45a70ce347cc1818a246db9077545188be1a27a393e"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.527235 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mq9" event={"ID":"de1dad58-7ec1-4867-8494-044120bf894b","Type":"ContainerStarted","Data":"1e184fe0692ba0ed84217fad990ab5f64066df8e968fc5ce8a0ca1e6c125b62b"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.529346 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" event={"ID":"3a5b9101-1923-4ff7-93ad-a50ac53f9852","Type":"ContainerStarted","Data":"678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.529371 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" event={"ID":"3a5b9101-1923-4ff7-93ad-a50ac53f9852","Type":"ContainerStarted","Data":"9b2eed40c2479895e7abc4b03960e1a1f5c1ea75e6eaa748e6d8a904fe68de10"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.529457 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" podUID="3a5b9101-1923-4ff7-93ad-a50ac53f9852" 
containerName="controller-manager" containerID="cri-o://678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef" gracePeriod=30 Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.529913 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.535521 4980 generic.go:334] "Generic (PLEG): container finished" podID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerID="87fbbfa7331c44a42fd9933cb012e027e8e1ad4262cade6a0a9575d627a3a720" exitCode=0 Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.535597 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdtx8" event={"ID":"fd9447f0-10ac-4740-a5a6-618b454b89be","Type":"ContainerDied","Data":"87fbbfa7331c44a42fd9933cb012e027e8e1ad4262cade6a0a9575d627a3a720"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.541409 4980 generic.go:334] "Generic (PLEG): container finished" podID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerID="2663090ede279f7279e2859cd39198e0d056f748e021bd861f888e9d2b0dcf0c" exitCode=0 Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.541575 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dphcj" event={"ID":"9cad7956-ecee-468f-9fbe-b8a99a646cfb","Type":"ContainerDied","Data":"2663090ede279f7279e2859cd39198e0d056f748e021bd861f888e9d2b0dcf0c"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.542744 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.542785 4980 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.544057 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j75z7" event={"ID":"1e3c7945-f3cb-4af2-8a0f-19b014123f74","Type":"ContainerStarted","Data":"70dda320e76bf05b9b39d0a1757a3b486e23a414b90b57169bb2f71205c4ddaf"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.547679 4980 generic.go:334] "Generic (PLEG): container finished" podID="83605c82-2947-4e84-8657-e9d040571dae" containerID="c223c25a607cb9ae6e6bdc96d08e812b847b20a0481954ab483a35b47df23a88" exitCode=0 Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.547734 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slxp5" event={"ID":"83605c82-2947-4e84-8657-e9d040571dae","Type":"ContainerDied","Data":"c223c25a607cb9ae6e6bdc96d08e812b847b20a0481954ab483a35b47df23a88"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.552012 4980 patch_prober.go:28] interesting pod/controller-manager-bc995945c-mxmhp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": EOF" start-of-body= Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.552046 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" podUID="3a5b9101-1923-4ff7-93ad-a50ac53f9852" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": EOF" Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.560275 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" event={"ID":"c24e59ff-fd92-421a-807f-adbacd7287ae","Type":"ContainerStarted","Data":"dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.560326 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" event={"ID":"c24e59ff-fd92-421a-807f-adbacd7287ae","Type":"ContainerStarted","Data":"4e709e600cbc2e487ccd25289d9fb955313cce422e386dd2930da2ad8523b47e"} Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.560365 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" podUID="c24e59ff-fd92-421a-807f-adbacd7287ae" containerName="route-controller-manager" containerID="cri-o://dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d" gracePeriod=30 Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.560494 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:36 crc kubenswrapper[4980]: E0107 03:35:36.565401 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2bfnr" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" Jan 07 03:35:36 crc kubenswrapper[4980]: E0107 03:35:36.565472 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-42gz5" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" 
Jan 07 03:35:36 crc kubenswrapper[4980]: E0107 03:35:36.568462 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9c5kx" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.599593 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" podStartSLOduration=21.59957373 podStartE2EDuration="21.59957373s" podCreationTimestamp="2026-01-07 03:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:36.598803436 +0000 UTC m=+183.164498171" watchObservedRunningTime="2026-01-07 03:35:36.59957373 +0000 UTC m=+183.165268465" Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.703744 4980 patch_prober.go:28] interesting pod/route-controller-manager-6ffb4b647f-cnnt7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:47422->10.217.0.55:8443: read: connection reset by peer" start-of-body= Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.703806 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" podUID="c24e59ff-fd92-421a-807f-adbacd7287ae" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:47422->10.217.0.55:8443: read: connection reset by peer" Jan 07 03:35:36 crc kubenswrapper[4980]: I0107 03:35:36.719362 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" podStartSLOduration=21.719346843 podStartE2EDuration="21.719346843s" podCreationTimestamp="2026-01-07 03:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:36.716439194 +0000 UTC m=+183.282133929" watchObservedRunningTime="2026-01-07 03:35:36.719346843 +0000 UTC m=+183.285041568" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.095063 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6ffb4b647f-cnnt7_c24e59ff-fd92-421a-807f-adbacd7287ae/route-controller-manager/0.log" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.095567 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.102490 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.136941 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k"] Jan 07 03:35:37 crc kubenswrapper[4980]: E0107 03:35:37.138066 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24e59ff-fd92-421a-807f-adbacd7287ae" containerName="route-controller-manager" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.138087 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24e59ff-fd92-421a-807f-adbacd7287ae" containerName="route-controller-manager" Jan 07 03:35:37 crc kubenswrapper[4980]: E0107 03:35:37.138102 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5b9101-1923-4ff7-93ad-a50ac53f9852" containerName="controller-manager" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.138109 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5b9101-1923-4ff7-93ad-a50ac53f9852" containerName="controller-manager" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.138204 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24e59ff-fd92-421a-807f-adbacd7287ae" containerName="route-controller-manager" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.138218 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5b9101-1923-4ff7-93ad-a50ac53f9852" containerName="controller-manager" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.138963 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.160100 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k"] Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195312 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-client-ca\") pod \"c24e59ff-fd92-421a-807f-adbacd7287ae\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195373 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24e59ff-fd92-421a-807f-adbacd7287ae-serving-cert\") pod \"c24e59ff-fd92-421a-807f-adbacd7287ae\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195435 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-config\") pod \"c24e59ff-fd92-421a-807f-adbacd7287ae\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195470 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hps22\" (UniqueName: \"kubernetes.io/projected/3a5b9101-1923-4ff7-93ad-a50ac53f9852-kube-api-access-hps22\") pod \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195522 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b9101-1923-4ff7-93ad-a50ac53f9852-serving-cert\") pod 
\"3a5b9101-1923-4ff7-93ad-a50ac53f9852\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195543 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-client-ca\") pod \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195600 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-config\") pod \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195636 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-proxy-ca-bundles\") pod \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\" (UID: \"3a5b9101-1923-4ff7-93ad-a50ac53f9852\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195674 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv5sc\" (UniqueName: \"kubernetes.io/projected/c24e59ff-fd92-421a-807f-adbacd7287ae-kube-api-access-sv5sc\") pod \"c24e59ff-fd92-421a-807f-adbacd7287ae\" (UID: \"c24e59ff-fd92-421a-807f-adbacd7287ae\") " Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.195953 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe73edac-2403-4210-b1ac-3ac701b91f37-serving-cert\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 
03:35:37.195995 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmq4\" (UniqueName: \"kubernetes.io/projected/fe73edac-2403-4210-b1ac-3ac701b91f37-kube-api-access-hpmq4\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.196017 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-client-ca\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.196081 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-config\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.196396 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "c24e59ff-fd92-421a-807f-adbacd7287ae" (UID: "c24e59ff-fd92-421a-807f-adbacd7287ae"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.196729 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a5b9101-1923-4ff7-93ad-a50ac53f9852" (UID: "3a5b9101-1923-4ff7-93ad-a50ac53f9852"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.197011 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3a5b9101-1923-4ff7-93ad-a50ac53f9852" (UID: "3a5b9101-1923-4ff7-93ad-a50ac53f9852"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.197269 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-config" (OuterVolumeSpecName: "config") pod "3a5b9101-1923-4ff7-93ad-a50ac53f9852" (UID: "3a5b9101-1923-4ff7-93ad-a50ac53f9852"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.197496 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-config" (OuterVolumeSpecName: "config") pod "c24e59ff-fd92-421a-807f-adbacd7287ae" (UID: "c24e59ff-fd92-421a-807f-adbacd7287ae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.203163 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5b9101-1923-4ff7-93ad-a50ac53f9852-kube-api-access-hps22" (OuterVolumeSpecName: "kube-api-access-hps22") pod "3a5b9101-1923-4ff7-93ad-a50ac53f9852" (UID: "3a5b9101-1923-4ff7-93ad-a50ac53f9852"). InnerVolumeSpecName "kube-api-access-hps22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.203336 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5b9101-1923-4ff7-93ad-a50ac53f9852-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a5b9101-1923-4ff7-93ad-a50ac53f9852" (UID: "3a5b9101-1923-4ff7-93ad-a50ac53f9852"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.203850 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24e59ff-fd92-421a-807f-adbacd7287ae-kube-api-access-sv5sc" (OuterVolumeSpecName: "kube-api-access-sv5sc") pod "c24e59ff-fd92-421a-807f-adbacd7287ae" (UID: "c24e59ff-fd92-421a-807f-adbacd7287ae"). InnerVolumeSpecName "kube-api-access-sv5sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.204621 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24e59ff-fd92-421a-807f-adbacd7287ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c24e59ff-fd92-421a-807f-adbacd7287ae" (UID: "c24e59ff-fd92-421a-807f-adbacd7287ae"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297316 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-config\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297412 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe73edac-2403-4210-b1ac-3ac701b91f37-serving-cert\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297448 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpmq4\" (UniqueName: \"kubernetes.io/projected/fe73edac-2403-4210-b1ac-3ac701b91f37-kube-api-access-hpmq4\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297466 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-client-ca\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297516 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297527 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24e59ff-fd92-421a-807f-adbacd7287ae-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297537 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24e59ff-fd92-421a-807f-adbacd7287ae-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297546 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hps22\" (UniqueName: \"kubernetes.io/projected/3a5b9101-1923-4ff7-93ad-a50ac53f9852-kube-api-access-hps22\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297608 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b9101-1923-4ff7-93ad-a50ac53f9852-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297617 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297650 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.297659 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a5b9101-1923-4ff7-93ad-a50ac53f9852-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 
03:35:37.297668 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv5sc\" (UniqueName: \"kubernetes.io/projected/c24e59ff-fd92-421a-807f-adbacd7287ae-kube-api-access-sv5sc\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.298489 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-client-ca\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.299206 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-config\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.304517 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe73edac-2403-4210-b1ac-3ac701b91f37-serving-cert\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.314235 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpmq4\" (UniqueName: \"kubernetes.io/projected/fe73edac-2403-4210-b1ac-3ac701b91f37-kube-api-access-hpmq4\") pod \"route-controller-manager-57bf9bb4d8-vzs7k\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 
03:35:37.483140 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.578041 4980 generic.go:334] "Generic (PLEG): container finished" podID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerID="b0f3eb6126d4d0dfa937a45a70ce347cc1818a246db9077545188be1a27a393e" exitCode=0 Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.578845 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mp9" event={"ID":"46100c6d-7c73-47b9-a30e-6409aabaf9db","Type":"ContainerDied","Data":"b0f3eb6126d4d0dfa937a45a70ce347cc1818a246db9077545188be1a27a393e"} Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.588937 4980 generic.go:334] "Generic (PLEG): container finished" podID="de1dad58-7ec1-4867-8494-044120bf894b" containerID="1e184fe0692ba0ed84217fad990ab5f64066df8e968fc5ce8a0ca1e6c125b62b" exitCode=0 Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.589025 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mq9" event={"ID":"de1dad58-7ec1-4867-8494-044120bf894b","Type":"ContainerDied","Data":"1e184fe0692ba0ed84217fad990ab5f64066df8e968fc5ce8a0ca1e6c125b62b"} Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.593438 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6ffb4b647f-cnnt7_c24e59ff-fd92-421a-807f-adbacd7287ae/route-controller-manager/0.log" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.593484 4980 generic.go:334] "Generic (PLEG): container finished" podID="c24e59ff-fd92-421a-807f-adbacd7287ae" containerID="dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d" exitCode=255 Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.593594 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" event={"ID":"c24e59ff-fd92-421a-807f-adbacd7287ae","Type":"ContainerDied","Data":"dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d"} Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.593650 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" event={"ID":"c24e59ff-fd92-421a-807f-adbacd7287ae","Type":"ContainerDied","Data":"4e709e600cbc2e487ccd25289d9fb955313cce422e386dd2930da2ad8523b47e"} Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.593671 4980 scope.go:117] "RemoveContainer" containerID="dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.593686 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.598814 4980 generic.go:334] "Generic (PLEG): container finished" podID="3a5b9101-1923-4ff7-93ad-a50ac53f9852" containerID="678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef" exitCode=0 Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.598856 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" event={"ID":"3a5b9101-1923-4ff7-93ad-a50ac53f9852","Type":"ContainerDied","Data":"678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef"} Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.598873 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" event={"ID":"3a5b9101-1923-4ff7-93ad-a50ac53f9852","Type":"ContainerDied","Data":"9b2eed40c2479895e7abc4b03960e1a1f5c1ea75e6eaa748e6d8a904fe68de10"} Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.598920 4980 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc995945c-mxmhp" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.615810 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j75z7" event={"ID":"1e3c7945-f3cb-4af2-8a0f-19b014123f74","Type":"ContainerStarted","Data":"f5a7d0263c94e9e3d7dcf363c05040a8ce6ddf5328e419d04c58726ca98390d3"} Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.621602 4980 scope.go:117] "RemoveContainer" containerID="dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d" Jan 07 03:35:37 crc kubenswrapper[4980]: E0107 03:35:37.622060 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d\": container with ID starting with dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d not found: ID does not exist" containerID="dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.622097 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d"} err="failed to get container status \"dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d\": rpc error: code = NotFound desc = could not find container \"dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d\": container with ID starting with dbd35d6e38e89781e3e1b6d1cbb5f2d850340d2bb46381222fee4eba1f04695d not found: ID does not exist" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.622140 4980 scope.go:117] "RemoveContainer" containerID="678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.647024 4980 scope.go:117] "RemoveContainer" 
containerID="678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef" Jan 07 03:35:37 crc kubenswrapper[4980]: E0107 03:35:37.650160 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef\": container with ID starting with 678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef not found: ID does not exist" containerID="678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.650341 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef"} err="failed to get container status \"678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef\": rpc error: code = NotFound desc = could not find container \"678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef\": container with ID starting with 678a90bb532fee27a5294229f29d21969370c93c8c6ac2abcc5c9c0ff378ddef not found: ID does not exist" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.655712 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7"] Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.657946 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ffb4b647f-cnnt7"] Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.665292 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bc995945c-mxmhp"] Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.669483 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bc995945c-mxmhp"] Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.678912 4980 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k"] Jan 07 03:35:37 crc kubenswrapper[4980]: W0107 03:35:37.684468 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe73edac_2403_4210_b1ac_3ac701b91f37.slice/crio-9ea197a662057d058c30a936e1f67a542c1571d615f49abd3c4c35a4ec4995d3 WatchSource:0}: Error finding container 9ea197a662057d058c30a936e1f67a542c1571d615f49abd3c4c35a4ec4995d3: Status 404 returned error can't find the container with id 9ea197a662057d058c30a936e1f67a542c1571d615f49abd3c4c35a4ec4995d3 Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.747979 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5b9101-1923-4ff7-93ad-a50ac53f9852" path="/var/lib/kubelet/pods/3a5b9101-1923-4ff7-93ad-a50ac53f9852/volumes" Jan 07 03:35:37 crc kubenswrapper[4980]: I0107 03:35:37.748505 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24e59ff-fd92-421a-807f-adbacd7287ae" path="/var/lib/kubelet/pods/c24e59ff-fd92-421a-807f-adbacd7287ae/volumes" Jan 07 03:35:38 crc kubenswrapper[4980]: I0107 03:35:38.624654 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j75z7" event={"ID":"1e3c7945-f3cb-4af2-8a0f-19b014123f74","Type":"ContainerStarted","Data":"c8ce4283a61700c12559909415d91a259e19ca2118f5e591f92037f2102ea559"} Jan 07 03:35:38 crc kubenswrapper[4980]: I0107 03:35:38.627425 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" event={"ID":"fe73edac-2403-4210-b1ac-3ac701b91f37","Type":"ContainerStarted","Data":"68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893"} Jan 07 03:35:38 crc kubenswrapper[4980]: I0107 03:35:38.627480 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" event={"ID":"fe73edac-2403-4210-b1ac-3ac701b91f37","Type":"ContainerStarted","Data":"9ea197a662057d058c30a936e1f67a542c1571d615f49abd3c4c35a4ec4995d3"} Jan 07 03:35:38 crc kubenswrapper[4980]: I0107 03:35:38.627510 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:38 crc kubenswrapper[4980]: I0107 03:35:38.637116 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:38 crc kubenswrapper[4980]: I0107 03:35:38.649822 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-j75z7" podStartSLOduration=165.649799768 podStartE2EDuration="2m45.649799768s" podCreationTimestamp="2026-01-07 03:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:38.646696843 +0000 UTC m=+185.212391578" watchObservedRunningTime="2026-01-07 03:35:38.649799768 +0000 UTC m=+185.215494503" Jan 07 03:35:38 crc kubenswrapper[4980]: I0107 03:35:38.677196 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" podStartSLOduration=3.677170206 podStartE2EDuration="3.677170206s" podCreationTimestamp="2026-01-07 03:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:38.673369069 +0000 UTC m=+185.239063804" watchObservedRunningTime="2026-01-07 03:35:38.677170206 +0000 UTC m=+185.242864941" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.417068 4980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh"] Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.418373 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.420180 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.420614 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.420979 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.421322 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.422407 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.422626 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.426245 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh"] Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.436979 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.494608 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-config\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.494669 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/3d1703ff-c25a-490f-817e-025071475fb7-kube-api-access-ngx4q\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.494710 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-proxy-ca-bundles\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.494870 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-client-ca\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.494896 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1703ff-c25a-490f-817e-025071475fb7-serving-cert\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " 
pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.595375 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-config\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.595448 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/3d1703ff-c25a-490f-817e-025071475fb7-kube-api-access-ngx4q\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.595481 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-proxy-ca-bundles\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.595540 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-client-ca\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.595586 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d1703ff-c25a-490f-817e-025071475fb7-serving-cert\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.597112 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-client-ca\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.597256 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-proxy-ca-bundles\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.597307 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-config\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.603359 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1703ff-c25a-490f-817e-025071475fb7-serving-cert\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.614193 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/3d1703ff-c25a-490f-817e-025071475fb7-kube-api-access-ngx4q\") pod \"controller-manager-6b4dcf95dd-hhlnh\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.636116 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slxp5" event={"ID":"83605c82-2947-4e84-8657-e9d040571dae","Type":"ContainerStarted","Data":"5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e"} Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.639248 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mp9" event={"ID":"46100c6d-7c73-47b9-a30e-6409aabaf9db","Type":"ContainerStarted","Data":"ee5ff5560ef50d344925ce4e2ce9811d0aea437332186422481ab0cab54490f8"} Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.641969 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mq9" event={"ID":"de1dad58-7ec1-4867-8494-044120bf894b","Type":"ContainerStarted","Data":"25b2ce87cf86990b0ce19b298827906d83d6827b9e6f66983e0e57bee66439d3"} Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.644526 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdtx8" event={"ID":"fd9447f0-10ac-4740-a5a6-618b454b89be","Type":"ContainerStarted","Data":"ff3c18a1b63200b1a16776893de9f43040969b6fd97c868419f86ff4922623cb"} Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.646948 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dphcj" event={"ID":"9cad7956-ecee-468f-9fbe-b8a99a646cfb","Type":"ContainerStarted","Data":"8d7fcb90103a5439cb0bda20846cea28c404f51d9754e6dffb63d7a595fdce8e"} Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.662404 4980 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-slxp5" podStartSLOduration=2.581044683 podStartE2EDuration="39.662384545s" podCreationTimestamp="2026-01-07 03:35:00 +0000 UTC" firstStartedPulling="2026-01-07 03:35:02.060004318 +0000 UTC m=+148.625699063" lastFinishedPulling="2026-01-07 03:35:39.14134419 +0000 UTC m=+185.707038925" observedRunningTime="2026-01-07 03:35:39.653592586 +0000 UTC m=+186.219287321" watchObservedRunningTime="2026-01-07 03:35:39.662384545 +0000 UTC m=+186.228079280" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.675963 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xdtx8" podStartSLOduration=3.035065489 podStartE2EDuration="37.675940439s" podCreationTimestamp="2026-01-07 03:35:02 +0000 UTC" firstStartedPulling="2026-01-07 03:35:04.168994903 +0000 UTC m=+150.734689638" lastFinishedPulling="2026-01-07 03:35:38.809869853 +0000 UTC m=+185.375564588" observedRunningTime="2026-01-07 03:35:39.672126092 +0000 UTC m=+186.237820827" watchObservedRunningTime="2026-01-07 03:35:39.675940439 +0000 UTC m=+186.241635164" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.691684 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dphcj" podStartSLOduration=2.9082561780000002 podStartE2EDuration="39.69166447s" podCreationTimestamp="2026-01-07 03:35:00 +0000 UTC" firstStartedPulling="2026-01-07 03:35:02.100367262 +0000 UTC m=+148.666061997" lastFinishedPulling="2026-01-07 03:35:38.883775554 +0000 UTC m=+185.449470289" observedRunningTime="2026-01-07 03:35:39.688676768 +0000 UTC m=+186.254371503" watchObservedRunningTime="2026-01-07 03:35:39.69166447 +0000 UTC m=+186.257359205" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.710962 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-56mp9" podStartSLOduration=3.015878773 podStartE2EDuration="36.710922998s" podCreationTimestamp="2026-01-07 03:35:03 +0000 UTC" firstStartedPulling="2026-01-07 03:35:05.201766407 +0000 UTC m=+151.767461142" lastFinishedPulling="2026-01-07 03:35:38.896810642 +0000 UTC m=+185.462505367" observedRunningTime="2026-01-07 03:35:39.705509713 +0000 UTC m=+186.271204458" watchObservedRunningTime="2026-01-07 03:35:39.710922998 +0000 UTC m=+186.276617723" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.724715 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2mq9" podStartSLOduration=3.033900525 podStartE2EDuration="36.72470077s" podCreationTimestamp="2026-01-07 03:35:03 +0000 UTC" firstStartedPulling="2026-01-07 03:35:05.216875369 +0000 UTC m=+151.782570094" lastFinishedPulling="2026-01-07 03:35:38.907675604 +0000 UTC m=+185.473370339" observedRunningTime="2026-01-07 03:35:39.722920806 +0000 UTC m=+186.288615551" watchObservedRunningTime="2026-01-07 03:35:39.72470077 +0000 UTC m=+186.290395505" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.731001 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:39 crc kubenswrapper[4980]: I0107 03:35:39.928680 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh"] Jan 07 03:35:39 crc kubenswrapper[4980]: W0107 03:35:39.939532 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d1703ff_c25a_490f_817e_025071475fb7.slice/crio-737ff0c7e166da43dd407ff696a40018d3937e468946604f44be20f54d811647 WatchSource:0}: Error finding container 737ff0c7e166da43dd407ff696a40018d3937e468946604f44be20f54d811647: Status 404 returned error can't find the container with id 737ff0c7e166da43dd407ff696a40018d3937e468946604f44be20f54d811647 Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.352637 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.352752 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.578739 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtrzt"] Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.653132 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" event={"ID":"3d1703ff-c25a-490f-817e-025071475fb7","Type":"ContainerStarted","Data":"a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c"} Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.653229 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" 
event={"ID":"3d1703ff-c25a-490f-817e-025071475fb7","Type":"ContainerStarted","Data":"737ff0c7e166da43dd407ff696a40018d3937e468946604f44be20f54d811647"} Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.669767 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.677626 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" podStartSLOduration=5.677611491 podStartE2EDuration="5.677611491s" podCreationTimestamp="2026-01-07 03:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:40.67659312 +0000 UTC m=+187.242287865" watchObservedRunningTime="2026-01-07 03:35:40.677611491 +0000 UTC m=+187.243306226" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.811570 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.811627 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.894182 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.895407 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.897996 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.898312 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.907858 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.916378 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ea83d41-05a1-4383-ae13-345f3888a7b2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:40 crc kubenswrapper[4980]: I0107 03:35:40.916456 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ea83d41-05a1-4383-ae13-345f3888a7b2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.018271 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ea83d41-05a1-4383-ae13-345f3888a7b2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.018374 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7ea83d41-05a1-4383-ae13-345f3888a7b2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.018478 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ea83d41-05a1-4383-ae13-345f3888a7b2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.038577 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ea83d41-05a1-4383-ae13-345f3888a7b2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.231435 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.427463 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-slxp5" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="registry-server" probeResult="failure" output=< Jan 07 03:35:41 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:35:41 crc kubenswrapper[4980]: > Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.660836 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.665379 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.719373 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 07 03:35:41 crc kubenswrapper[4980]: I0107 03:35:41.866398 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dphcj" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="registry-server" probeResult="failure" output=< Jan 07 03:35:41 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:35:41 crc kubenswrapper[4980]: > Jan 07 03:35:42 crc kubenswrapper[4980]: I0107 03:35:42.576933 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:42 crc kubenswrapper[4980]: I0107 03:35:42.577284 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:42 crc kubenswrapper[4980]: I0107 03:35:42.667347 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7ea83d41-05a1-4383-ae13-345f3888a7b2","Type":"ContainerStarted","Data":"a6b35c861d69c86669b6160abeb40835a8a959cfdfaa1a929f3c75d5c212d3ce"} Jan 07 03:35:42 crc kubenswrapper[4980]: I0107 03:35:42.673510 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:43 crc kubenswrapper[4980]: I0107 03:35:43.376187 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:43 crc kubenswrapper[4980]: I0107 03:35:43.376260 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:43 crc kubenswrapper[4980]: I0107 03:35:43.674402 4980 generic.go:334] "Generic (PLEG): container finished" podID="7ea83d41-05a1-4383-ae13-345f3888a7b2" containerID="2f76697ccc763ac15f93b9cf853a7db6a5221092d96df0bb429351c9320236ce" exitCode=0 Jan 07 03:35:43 crc kubenswrapper[4980]: I0107 03:35:43.675390 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7ea83d41-05a1-4383-ae13-345f3888a7b2","Type":"ContainerDied","Data":"2f76697ccc763ac15f93b9cf853a7db6a5221092d96df0bb429351c9320236ce"} Jan 07 03:35:43 crc kubenswrapper[4980]: I0107 03:35:43.779669 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:43 crc kubenswrapper[4980]: I0107 03:35:43.779724 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.436373 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2mq9" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="registry-server" probeResult="failure" output=< Jan 
07 03:35:44 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:35:44 crc kubenswrapper[4980]: > Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.818105 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56mp9" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="registry-server" probeResult="failure" output=< Jan 07 03:35:44 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:35:44 crc kubenswrapper[4980]: > Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.971206 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.974614 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ea83d41-05a1-4383-ae13-345f3888a7b2-kubelet-dir\") pod \"7ea83d41-05a1-4383-ae13-345f3888a7b2\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.974757 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ea83d41-05a1-4383-ae13-345f3888a7b2-kube-api-access\") pod \"7ea83d41-05a1-4383-ae13-345f3888a7b2\" (UID: \"7ea83d41-05a1-4383-ae13-345f3888a7b2\") " Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.974749 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea83d41-05a1-4383-ae13-345f3888a7b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ea83d41-05a1-4383-ae13-345f3888a7b2" (UID: "7ea83d41-05a1-4383-ae13-345f3888a7b2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.975008 4980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ea83d41-05a1-4383-ae13-345f3888a7b2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:44 crc kubenswrapper[4980]: I0107 03:35:44.980872 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea83d41-05a1-4383-ae13-345f3888a7b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ea83d41-05a1-4383-ae13-345f3888a7b2" (UID: "7ea83d41-05a1-4383-ae13-345f3888a7b2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.076274 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ea83d41-05a1-4383-ae13-345f3888a7b2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.491745 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 07 03:35:45 crc kubenswrapper[4980]: E0107 03:35:45.492678 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea83d41-05a1-4383-ae13-345f3888a7b2" containerName="pruner" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.492697 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea83d41-05a1-4383-ae13-345f3888a7b2" containerName="pruner" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.492846 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea83d41-05a1-4383-ae13-345f3888a7b2" containerName="pruner" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.493423 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.503304 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.583302 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.583384 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kube-api-access\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.583522 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-var-lock\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684440 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684520 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kube-api-access\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684543 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-var-lock\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684643 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-var-lock\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684712 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684753 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7ea83d41-05a1-4383-ae13-345f3888a7b2","Type":"ContainerDied","Data":"a6b35c861d69c86669b6160abeb40835a8a959cfdfaa1a929f3c75d5c212d3ce"} Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684799 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b35c861d69c86669b6160abeb40835a8a959cfdfaa1a929f3c75d5c212d3ce" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.684875 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.714544 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kube-api-access\") pod \"installer-9-crc\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:45 crc kubenswrapper[4980]: I0107 03:35:45.870235 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:35:46 crc kubenswrapper[4980]: I0107 03:35:46.285202 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 07 03:35:46 crc kubenswrapper[4980]: W0107 03:35:46.290733 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9ac6db5a_85f4_4134_bf3b_2ecbdbbd44af.slice/crio-4cc4c8076ef2368f5f7436e22dee001fa3d5189fdbb32df42d3d2934e956ae13 WatchSource:0}: Error finding container 4cc4c8076ef2368f5f7436e22dee001fa3d5189fdbb32df42d3d2934e956ae13: Status 404 returned error can't find the container with id 4cc4c8076ef2368f5f7436e22dee001fa3d5189fdbb32df42d3d2934e956ae13 Jan 07 03:35:46 crc kubenswrapper[4980]: I0107 03:35:46.694292 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af","Type":"ContainerStarted","Data":"4cc4c8076ef2368f5f7436e22dee001fa3d5189fdbb32df42d3d2934e956ae13"} Jan 07 03:35:48 crc kubenswrapper[4980]: I0107 03:35:48.708241 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af","Type":"ContainerStarted","Data":"3f1ebfbb6efecc52be8acf28de15d7eb285296a226342e9f1423990a913d1c5f"} Jan 07 03:35:48 crc kubenswrapper[4980]: I0107 
03:35:48.722202 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.7221840950000002 podStartE2EDuration="3.722184095s" podCreationTimestamp="2026-01-07 03:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:48.721743811 +0000 UTC m=+195.287438546" watchObservedRunningTime="2026-01-07 03:35:48.722184095 +0000 UTC m=+195.287878840" Jan 07 03:35:50 crc kubenswrapper[4980]: I0107 03:35:50.429076 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:50 crc kubenswrapper[4980]: I0107 03:35:50.514716 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:35:50 crc kubenswrapper[4980]: I0107 03:35:50.865958 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:50 crc kubenswrapper[4980]: I0107 03:35:50.920743 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:51 crc kubenswrapper[4980]: I0107 03:35:51.669012 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dphcj"] Jan 07 03:35:52 crc kubenswrapper[4980]: I0107 03:35:52.647742 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:52 crc kubenswrapper[4980]: I0107 03:35:52.729759 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dphcj" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="registry-server" 
containerID="cri-o://8d7fcb90103a5439cb0bda20846cea28c404f51d9754e6dffb63d7a595fdce8e" gracePeriod=2 Jan 07 03:35:53 crc kubenswrapper[4980]: I0107 03:35:53.443975 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:53 crc kubenswrapper[4980]: I0107 03:35:53.496477 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:35:53 crc kubenswrapper[4980]: I0107 03:35:53.741311 4980 generic.go:334] "Generic (PLEG): container finished" podID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerID="8d7fcb90103a5439cb0bda20846cea28c404f51d9754e6dffb63d7a595fdce8e" exitCode=0 Jan 07 03:35:53 crc kubenswrapper[4980]: I0107 03:35:53.741387 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dphcj" event={"ID":"9cad7956-ecee-468f-9fbe-b8a99a646cfb","Type":"ContainerDied","Data":"8d7fcb90103a5439cb0bda20846cea28c404f51d9754e6dffb63d7a595fdce8e"} Jan 07 03:35:53 crc kubenswrapper[4980]: I0107 03:35:53.819418 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:53 crc kubenswrapper[4980]: I0107 03:35:53.863948 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:54 crc kubenswrapper[4980]: I0107 03:35:54.908474 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.032041 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmpcf\" (UniqueName: \"kubernetes.io/projected/9cad7956-ecee-468f-9fbe-b8a99a646cfb-kube-api-access-vmpcf\") pod \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.032179 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-catalog-content\") pod \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.032228 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-utilities\") pod \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\" (UID: \"9cad7956-ecee-468f-9fbe-b8a99a646cfb\") " Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.033988 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-utilities" (OuterVolumeSpecName: "utilities") pod "9cad7956-ecee-468f-9fbe-b8a99a646cfb" (UID: "9cad7956-ecee-468f-9fbe-b8a99a646cfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.042311 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cad7956-ecee-468f-9fbe-b8a99a646cfb-kube-api-access-vmpcf" (OuterVolumeSpecName: "kube-api-access-vmpcf") pod "9cad7956-ecee-468f-9fbe-b8a99a646cfb" (UID: "9cad7956-ecee-468f-9fbe-b8a99a646cfb"). InnerVolumeSpecName "kube-api-access-vmpcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.106143 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cad7956-ecee-468f-9fbe-b8a99a646cfb" (UID: "9cad7956-ecee-468f-9fbe-b8a99a646cfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.134104 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmpcf\" (UniqueName: \"kubernetes.io/projected/9cad7956-ecee-468f-9fbe-b8a99a646cfb-kube-api-access-vmpcf\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.134169 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.134186 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cad7956-ecee-468f-9fbe-b8a99a646cfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.462594 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh"] Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.463040 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" podUID="3d1703ff-c25a-490f-817e-025071475fb7" containerName="controller-manager" containerID="cri-o://a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c" gracePeriod=30 Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.484262 4980 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k"] Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.484588 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" podUID="fe73edac-2403-4210-b1ac-3ac701b91f37" containerName="route-controller-manager" containerID="cri-o://68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893" gracePeriod=30 Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.755683 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dphcj" event={"ID":"9cad7956-ecee-468f-9fbe-b8a99a646cfb","Type":"ContainerDied","Data":"5a242f59c47029492a789d1a0c54ef259a5f15de7bd120d6ba2b2e708481d716"} Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.756194 4980 scope.go:117] "RemoveContainer" containerID="8d7fcb90103a5439cb0bda20846cea28c404f51d9754e6dffb63d7a595fdce8e" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.755795 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dphcj" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.772769 4980 scope.go:117] "RemoveContainer" containerID="2663090ede279f7279e2859cd39198e0d056f748e021bd861f888e9d2b0dcf0c" Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.801585 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dphcj"] Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.805129 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dphcj"] Jan 07 03:35:55 crc kubenswrapper[4980]: I0107 03:35:55.832817 4980 scope.go:117] "RemoveContainer" containerID="e0aa69d618effa9483d5a8cae443a9262c3febf197b52708a9e6fcbfb4fde0fe" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.069247 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdtx8"] Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.070122 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xdtx8" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="registry-server" containerID="cri-o://ff3c18a1b63200b1a16776893de9f43040969b6fd97c868419f86ff4922623cb" gracePeriod=2 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.265521 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56mp9"] Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.265878 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56mp9" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="registry-server" containerID="cri-o://ee5ff5560ef50d344925ce4e2ce9811d0aea437332186422481ab0cab54490f8" gracePeriod=2 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.621118 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.623774 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.683670 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9bf884b85-ml7cd"] Jan 07 03:35:56 crc kubenswrapper[4980]: E0107 03:35:56.685484 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="extract-content" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685530 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="extract-content" Jan 07 03:35:56 crc kubenswrapper[4980]: E0107 03:35:56.685585 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="registry-server" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685594 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="registry-server" Jan 07 03:35:56 crc kubenswrapper[4980]: E0107 03:35:56.685608 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1703ff-c25a-490f-817e-025071475fb7" containerName="controller-manager" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685617 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1703ff-c25a-490f-817e-025071475fb7" containerName="controller-manager" Jan 07 03:35:56 crc kubenswrapper[4980]: E0107 03:35:56.685635 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe73edac-2403-4210-b1ac-3ac701b91f37" containerName="route-controller-manager" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685642 4980 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fe73edac-2403-4210-b1ac-3ac701b91f37" containerName="route-controller-manager" Jan 07 03:35:56 crc kubenswrapper[4980]: E0107 03:35:56.685651 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="extract-utilities" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685659 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="extract-utilities" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685858 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe73edac-2403-4210-b1ac-3ac701b91f37" containerName="route-controller-manager" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685878 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1703ff-c25a-490f-817e-025071475fb7" containerName="controller-manager" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.685895 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" containerName="registry-server" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.687360 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.693444 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9bf884b85-ml7cd"] Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.764085 4980 generic.go:334] "Generic (PLEG): container finished" podID="3d1703ff-c25a-490f-817e-025071475fb7" containerID="a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c" exitCode=0 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.764144 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" event={"ID":"3d1703ff-c25a-490f-817e-025071475fb7","Type":"ContainerDied","Data":"a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.764169 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" event={"ID":"3d1703ff-c25a-490f-817e-025071475fb7","Type":"ContainerDied","Data":"737ff0c7e166da43dd407ff696a40018d3937e468946604f44be20f54d811647"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.764185 4980 scope.go:117] "RemoveContainer" containerID="a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.764518 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.766648 4980 generic.go:334] "Generic (PLEG): container finished" podID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerID="d228d6b3f5aa638b93530984e3b65752cc949e02c3fed88fd9be91fbbae0bc40" exitCode=0 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.766688 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c5kx" event={"ID":"16999f3a-e9cd-449c-bb6f-72b7759cb32e","Type":"ContainerDied","Data":"d228d6b3f5aa638b93530984e3b65752cc949e02c3fed88fd9be91fbbae0bc40"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.768301 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/3d1703ff-c25a-490f-817e-025071475fb7-kube-api-access-ngx4q\") pod \"3d1703ff-c25a-490f-817e-025071475fb7\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.768371 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpmq4\" (UniqueName: \"kubernetes.io/projected/fe73edac-2403-4210-b1ac-3ac701b91f37-kube-api-access-hpmq4\") pod \"fe73edac-2403-4210-b1ac-3ac701b91f37\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.768421 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe73edac-2403-4210-b1ac-3ac701b91f37-serving-cert\") pod \"fe73edac-2403-4210-b1ac-3ac701b91f37\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.768537 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-config\") 
pod \"fe73edac-2403-4210-b1ac-3ac701b91f37\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.768584 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-proxy-ca-bundles\") pod \"3d1703ff-c25a-490f-817e-025071475fb7\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769309 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-client-ca\") pod \"fe73edac-2403-4210-b1ac-3ac701b91f37\" (UID: \"fe73edac-2403-4210-b1ac-3ac701b91f37\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769369 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-client-ca\") pod \"3d1703ff-c25a-490f-817e-025071475fb7\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769390 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-config\") pod \"3d1703ff-c25a-490f-817e-025071475fb7\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769433 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d1703ff-c25a-490f-817e-025071475fb7-serving-cert\") pod \"3d1703ff-c25a-490f-817e-025071475fb7\" (UID: \"3d1703ff-c25a-490f-817e-025071475fb7\") " Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769858 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2x4vg\" (UniqueName: \"kubernetes.io/projected/bb59366e-e7ec-44b4-ae76-ea152401a3c4-kube-api-access-2x4vg\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769893 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb59366e-e7ec-44b4-ae76-ea152401a3c4-serving-cert\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769918 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-client-ca\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769969 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-proxy-ca-bundles\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.769996 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-config\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " 
pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.770030 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-client-ca" (OuterVolumeSpecName: "client-ca") pod "fe73edac-2403-4210-b1ac-3ac701b91f37" (UID: "fe73edac-2403-4210-b1ac-3ac701b91f37"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.770663 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d1703ff-c25a-490f-817e-025071475fb7" (UID: "3d1703ff-c25a-490f-817e-025071475fb7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.771210 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-config" (OuterVolumeSpecName: "config") pod "3d1703ff-c25a-490f-817e-025071475fb7" (UID: "3d1703ff-c25a-490f-817e-025071475fb7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.772744 4980 generic.go:334] "Generic (PLEG): container finished" podID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerID="ee5ff5560ef50d344925ce4e2ce9811d0aea437332186422481ab0cab54490f8" exitCode=0 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.772833 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mp9" event={"ID":"46100c6d-7c73-47b9-a30e-6409aabaf9db","Type":"ContainerDied","Data":"ee5ff5560ef50d344925ce4e2ce9811d0aea437332186422481ab0cab54490f8"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.772894 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mp9" event={"ID":"46100c6d-7c73-47b9-a30e-6409aabaf9db","Type":"ContainerDied","Data":"99a3e6f9ac317eebaf3908b87693a531692f47c70648007d83ee72994fe0ef03"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.772908 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a3e6f9ac317eebaf3908b87693a531692f47c70648007d83ee72994fe0ef03" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.773302 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-config" (OuterVolumeSpecName: "config") pod "fe73edac-2403-4210-b1ac-3ac701b91f37" (UID: "fe73edac-2403-4210-b1ac-3ac701b91f37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.776522 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe73edac-2403-4210-b1ac-3ac701b91f37-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fe73edac-2403-4210-b1ac-3ac701b91f37" (UID: "fe73edac-2403-4210-b1ac-3ac701b91f37"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.778636 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d1703ff-c25a-490f-817e-025071475fb7" (UID: "3d1703ff-c25a-490f-817e-025071475fb7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.780041 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe73edac-2403-4210-b1ac-3ac701b91f37-kube-api-access-hpmq4" (OuterVolumeSpecName: "kube-api-access-hpmq4") pod "fe73edac-2403-4210-b1ac-3ac701b91f37" (UID: "fe73edac-2403-4210-b1ac-3ac701b91f37"). InnerVolumeSpecName "kube-api-access-hpmq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.780294 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1703ff-c25a-490f-817e-025071475fb7-kube-api-access-ngx4q" (OuterVolumeSpecName: "kube-api-access-ngx4q") pod "3d1703ff-c25a-490f-817e-025071475fb7" (UID: "3d1703ff-c25a-490f-817e-025071475fb7"). InnerVolumeSpecName "kube-api-access-ngx4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.781942 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1703ff-c25a-490f-817e-025071475fb7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d1703ff-c25a-490f-817e-025071475fb7" (UID: "3d1703ff-c25a-490f-817e-025071475fb7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.782431 4980 generic.go:334] "Generic (PLEG): container finished" podID="fe73edac-2403-4210-b1ac-3ac701b91f37" containerID="68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893" exitCode=0 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.782519 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" event={"ID":"fe73edac-2403-4210-b1ac-3ac701b91f37","Type":"ContainerDied","Data":"68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.782549 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" event={"ID":"fe73edac-2403-4210-b1ac-3ac701b91f37","Type":"ContainerDied","Data":"9ea197a662057d058c30a936e1f67a542c1571d615f49abd3c4c35a4ec4995d3"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.782693 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.795937 4980 generic.go:334] "Generic (PLEG): container finished" podID="52350cdf-a327-410e-89cf-6666175b6ddc" containerID="08b52504774cea3b8fe2d4e62e93d02be7267051418980b57b0068e0adffc3c1" exitCode=0 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.796158 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42gz5" event={"ID":"52350cdf-a327-410e-89cf-6666175b6ddc","Type":"ContainerDied","Data":"08b52504774cea3b8fe2d4e62e93d02be7267051418980b57b0068e0adffc3c1"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.798845 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bfnr" event={"ID":"7749febc-7b8f-4a6b-96e8-2579f281cede","Type":"ContainerStarted","Data":"67352690022e49bcd22a0cdeaa7736e2f45bb4d902f2072638d88527f6878030"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.805258 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdtx8" event={"ID":"fd9447f0-10ac-4740-a5a6-618b454b89be","Type":"ContainerDied","Data":"ff3c18a1b63200b1a16776893de9f43040969b6fd97c868419f86ff4922623cb"} Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.805221 4980 generic.go:334] "Generic (PLEG): container finished" podID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerID="ff3c18a1b63200b1a16776893de9f43040969b6fd97c868419f86ff4922623cb" exitCode=0 Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871102 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-proxy-ca-bundles\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " 
pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871176 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-config\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871224 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x4vg\" (UniqueName: \"kubernetes.io/projected/bb59366e-e7ec-44b4-ae76-ea152401a3c4-kube-api-access-2x4vg\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871247 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb59366e-e7ec-44b4-ae76-ea152401a3c4-serving-cert\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871273 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-client-ca\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871400 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/3d1703ff-c25a-490f-817e-025071475fb7-kube-api-access-ngx4q\") on node \"crc\" 
DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871414 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpmq4\" (UniqueName: \"kubernetes.io/projected/fe73edac-2403-4210-b1ac-3ac701b91f37-kube-api-access-hpmq4\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871427 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe73edac-2403-4210-b1ac-3ac701b91f37-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871437 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871447 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871458 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe73edac-2403-4210-b1ac-3ac701b91f37-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871469 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871479 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d1703ff-c25a-490f-817e-025071475fb7-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.871489 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3d1703ff-c25a-490f-817e-025071475fb7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.872853 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-config\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.872933 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-client-ca\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.872928 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-proxy-ca-bundles\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.883721 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb59366e-e7ec-44b4-ae76-ea152401a3c4-serving-cert\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.887275 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x4vg\" (UniqueName: 
\"kubernetes.io/projected/bb59366e-e7ec-44b4-ae76-ea152401a3c4-kube-api-access-2x4vg\") pod \"controller-manager-9bf884b85-ml7cd\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.921410 4980 scope.go:117] "RemoveContainer" containerID="a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c" Jan 07 03:35:56 crc kubenswrapper[4980]: E0107 03:35:56.922022 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c\": container with ID starting with a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c not found: ID does not exist" containerID="a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.922072 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c"} err="failed to get container status \"a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c\": rpc error: code = NotFound desc = could not find container \"a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c\": container with ID starting with a8ebcf8e3d12d62ce70f05ce6bc9b9abab3f94fd6f8c7ce247f38e14aba4846c not found: ID does not exist" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.922103 4980 scope.go:117] "RemoveContainer" containerID="68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.933050 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.944228 4980 scope.go:117] "RemoveContainer" containerID="68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893" Jan 07 03:35:56 crc kubenswrapper[4980]: E0107 03:35:56.944629 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893\": container with ID starting with 68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893 not found: ID does not exist" containerID="68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.944672 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893"} err="failed to get container status \"68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893\": rpc error: code = NotFound desc = could not find container \"68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893\": container with ID starting with 68af06f50a0830afb0156f8b502baf93162d31632625b0eb9b2105d609912893 not found: ID does not exist" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.946096 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k"] Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.950333 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.950647 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57bf9bb4d8-vzs7k"] Jan 07 03:35:56 crc kubenswrapper[4980]: I0107 03:35:56.971854 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.077280 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-utilities\") pod \"fd9447f0-10ac-4740-a5a6-618b454b89be\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.077346 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-utilities\") pod \"46100c6d-7c73-47b9-a30e-6409aabaf9db\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.077410 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/46100c6d-7c73-47b9-a30e-6409aabaf9db-kube-api-access-ptbxm\") pod \"46100c6d-7c73-47b9-a30e-6409aabaf9db\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.077445 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-catalog-content\") pod \"fd9447f0-10ac-4740-a5a6-618b454b89be\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.077619 4980 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szg8z\" (UniqueName: \"kubernetes.io/projected/fd9447f0-10ac-4740-a5a6-618b454b89be-kube-api-access-szg8z\") pod \"fd9447f0-10ac-4740-a5a6-618b454b89be\" (UID: \"fd9447f0-10ac-4740-a5a6-618b454b89be\") " Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.077644 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-catalog-content\") pod \"46100c6d-7c73-47b9-a30e-6409aabaf9db\" (UID: \"46100c6d-7c73-47b9-a30e-6409aabaf9db\") " Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.078296 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-utilities" (OuterVolumeSpecName: "utilities") pod "46100c6d-7c73-47b9-a30e-6409aabaf9db" (UID: "46100c6d-7c73-47b9-a30e-6409aabaf9db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.078810 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-utilities" (OuterVolumeSpecName: "utilities") pod "fd9447f0-10ac-4740-a5a6-618b454b89be" (UID: "fd9447f0-10ac-4740-a5a6-618b454b89be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.083661 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9447f0-10ac-4740-a5a6-618b454b89be-kube-api-access-szg8z" (OuterVolumeSpecName: "kube-api-access-szg8z") pod "fd9447f0-10ac-4740-a5a6-618b454b89be" (UID: "fd9447f0-10ac-4740-a5a6-618b454b89be"). InnerVolumeSpecName "kube-api-access-szg8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.084761 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46100c6d-7c73-47b9-a30e-6409aabaf9db-kube-api-access-ptbxm" (OuterVolumeSpecName: "kube-api-access-ptbxm") pod "46100c6d-7c73-47b9-a30e-6409aabaf9db" (UID: "46100c6d-7c73-47b9-a30e-6409aabaf9db"). InnerVolumeSpecName "kube-api-access-ptbxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.103404 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh"] Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.107255 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b4dcf95dd-hhlnh"] Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.131838 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd9447f0-10ac-4740-a5a6-618b454b89be" (UID: "fd9447f0-10ac-4740-a5a6-618b454b89be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.178629 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.178670 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/46100c6d-7c73-47b9-a30e-6409aabaf9db-kube-api-access-ptbxm\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.178682 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.178691 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szg8z\" (UniqueName: \"kubernetes.io/projected/fd9447f0-10ac-4740-a5a6-618b454b89be-kube-api-access-szg8z\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.178700 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9447f0-10ac-4740-a5a6-618b454b89be-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.235668 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46100c6d-7c73-47b9-a30e-6409aabaf9db" (UID: "46100c6d-7c73-47b9-a30e-6409aabaf9db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.279720 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46100c6d-7c73-47b9-a30e-6409aabaf9db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.397985 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9bf884b85-ml7cd"] Jan 07 03:35:57 crc kubenswrapper[4980]: W0107 03:35:57.399340 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb59366e_e7ec_44b4_ae76_ea152401a3c4.slice/crio-f519f495d4295171203d969c532efc05387d4abe3d40fca59ad496b07dfc7088 WatchSource:0}: Error finding container f519f495d4295171203d969c532efc05387d4abe3d40fca59ad496b07dfc7088: Status 404 returned error can't find the container with id f519f495d4295171203d969c532efc05387d4abe3d40fca59ad496b07dfc7088 Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.746780 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d1703ff-c25a-490f-817e-025071475fb7" path="/var/lib/kubelet/pods/3d1703ff-c25a-490f-817e-025071475fb7/volumes" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.747974 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cad7956-ecee-468f-9fbe-b8a99a646cfb" path="/var/lib/kubelet/pods/9cad7956-ecee-468f-9fbe-b8a99a646cfb/volumes" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.749305 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe73edac-2403-4210-b1ac-3ac701b91f37" path="/var/lib/kubelet/pods/fe73edac-2403-4210-b1ac-3ac701b91f37/volumes" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.812664 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" 
event={"ID":"bb59366e-e7ec-44b4-ae76-ea152401a3c4","Type":"ContainerStarted","Data":"00f72de41c8a12fe7c9fdb5255de29f0e663d062dafc7c294a63116db09aaa5b"} Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.812722 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" event={"ID":"bb59366e-e7ec-44b4-ae76-ea152401a3c4","Type":"ContainerStarted","Data":"f519f495d4295171203d969c532efc05387d4abe3d40fca59ad496b07dfc7088"} Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.812927 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.816240 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42gz5" event={"ID":"52350cdf-a327-410e-89cf-6666175b6ddc","Type":"ContainerStarted","Data":"3fb83fa1b8b40e0ed5c6c4b4c673051eeb7963aabd4e6ef2fc46d8884ad96791"} Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.817923 4980 generic.go:334] "Generic (PLEG): container finished" podID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerID="67352690022e49bcd22a0cdeaa7736e2f45bb4d902f2072638d88527f6878030" exitCode=0 Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.817981 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bfnr" event={"ID":"7749febc-7b8f-4a6b-96e8-2579f281cede","Type":"ContainerDied","Data":"67352690022e49bcd22a0cdeaa7736e2f45bb4d902f2072638d88527f6878030"} Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.820599 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdtx8" event={"ID":"fd9447f0-10ac-4740-a5a6-618b454b89be","Type":"ContainerDied","Data":"524db72c0ebbacfac31fd37fb16916d3fb240500791c38bb576dafdc822e4015"} Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.820638 4980 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdtx8" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.820790 4980 scope.go:117] "RemoveContainer" containerID="ff3c18a1b63200b1a16776893de9f43040969b6fd97c868419f86ff4922623cb" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.829942 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c5kx" event={"ID":"16999f3a-e9cd-449c-bb6f-72b7759cb32e","Type":"ContainerStarted","Data":"8b0b0e2ab6146bd712e164f64f41873325ab4ceb00497991617f55316c214c16"} Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.829979 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56mp9" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.842699 4980 scope.go:117] "RemoveContainer" containerID="87fbbfa7331c44a42fd9933cb012e027e8e1ad4262cade6a0a9575d627a3a720" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.851766 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.853715 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" podStartSLOduration=2.8537040559999998 podStartE2EDuration="2.853704056s" podCreationTimestamp="2026-01-07 03:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:35:57.852827598 +0000 UTC m=+204.418522333" watchObservedRunningTime="2026-01-07 03:35:57.853704056 +0000 UTC m=+204.419398791" Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.859546 4980 scope.go:117] "RemoveContainer" containerID="86d13a3e4e66a8875c95ddda84d491355a05b7b39e8f6d149bb511faf7fa6d2d" Jan 07 03:35:57 crc 
kubenswrapper[4980]: I0107 03:35:57.872258 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdtx8"] Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.884272 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdtx8"] Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.922872 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56mp9"] Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.928682 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56mp9"] Jan 07 03:35:57 crc kubenswrapper[4980]: I0107 03:35:57.945960 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9c5kx" podStartSLOduration=2.807329948 podStartE2EDuration="56.945938122s" podCreationTimestamp="2026-01-07 03:35:01 +0000 UTC" firstStartedPulling="2026-01-07 03:35:03.163421292 +0000 UTC m=+149.729116027" lastFinishedPulling="2026-01-07 03:35:57.302029466 +0000 UTC m=+203.867724201" observedRunningTime="2026-01-07 03:35:57.944227678 +0000 UTC m=+204.509922413" watchObservedRunningTime="2026-01-07 03:35:57.945938122 +0000 UTC m=+204.511632857" Jan 07 03:35:58 crc kubenswrapper[4980]: I0107 03:35:58.015575 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-42gz5" podStartSLOduration=2.816674812 podStartE2EDuration="58.015525251s" podCreationTimestamp="2026-01-07 03:35:00 +0000 UTC" firstStartedPulling="2026-01-07 03:35:02.072037086 +0000 UTC m=+148.637731821" lastFinishedPulling="2026-01-07 03:35:57.270887525 +0000 UTC m=+203.836582260" observedRunningTime="2026-01-07 03:35:58.014050775 +0000 UTC m=+204.579745520" watchObservedRunningTime="2026-01-07 03:35:58.015525251 +0000 UTC m=+204.581219986" Jan 07 03:35:58 crc kubenswrapper[4980]: I0107 03:35:58.852093 4980 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bfnr" event={"ID":"7749febc-7b8f-4a6b-96e8-2579f281cede","Type":"ContainerStarted","Data":"99797584c3f165e1ad63b5dc4b9af1ddd9d4fe4d4ea094d12343bda6070c895b"} Jan 07 03:35:58 crc kubenswrapper[4980]: I0107 03:35:58.870745 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2bfnr" podStartSLOduration=2.454050855 podStartE2EDuration="59.870728914s" podCreationTimestamp="2026-01-07 03:34:59 +0000 UTC" firstStartedPulling="2026-01-07 03:35:01.055080396 +0000 UTC m=+147.620775131" lastFinishedPulling="2026-01-07 03:35:58.471758445 +0000 UTC m=+205.037453190" observedRunningTime="2026-01-07 03:35:58.870058583 +0000 UTC m=+205.435753318" watchObservedRunningTime="2026-01-07 03:35:58.870728914 +0000 UTC m=+205.436423649" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.432786 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc"] Jan 07 03:35:59 crc kubenswrapper[4980]: E0107 03:35:59.433057 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="extract-content" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433072 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="extract-content" Jan 07 03:35:59 crc kubenswrapper[4980]: E0107 03:35:59.433089 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="extract-utilities" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433095 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="extract-utilities" Jan 07 03:35:59 crc kubenswrapper[4980]: E0107 03:35:59.433108 4980 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="extract-utilities" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433114 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="extract-utilities" Jan 07 03:35:59 crc kubenswrapper[4980]: E0107 03:35:59.433122 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="registry-server" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433130 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="registry-server" Jan 07 03:35:59 crc kubenswrapper[4980]: E0107 03:35:59.433139 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="extract-content" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433148 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="extract-content" Jan 07 03:35:59 crc kubenswrapper[4980]: E0107 03:35:59.433164 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="registry-server" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433170 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="registry-server" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433268 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" containerName="registry-server" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433280 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" containerName="registry-server" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.433769 4980 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.437989 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.438208 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.438395 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.438526 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.438675 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.438834 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.448302 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc"] Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.612520 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9246cf31-7451-4007-a5d1-b63453a9b612-serving-cert\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.612735 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csp42\" (UniqueName: \"kubernetes.io/projected/9246cf31-7451-4007-a5d1-b63453a9b612-kube-api-access-csp42\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.612813 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-config\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.612838 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-client-ca\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.714286 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9246cf31-7451-4007-a5d1-b63453a9b612-serving-cert\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.714367 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csp42\" (UniqueName: 
\"kubernetes.io/projected/9246cf31-7451-4007-a5d1-b63453a9b612-kube-api-access-csp42\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.714401 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-config\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.714435 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-client-ca\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.715514 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-client-ca\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.716685 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-config\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc 
kubenswrapper[4980]: I0107 03:35:59.726870 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9246cf31-7451-4007-a5d1-b63453a9b612-serving-cert\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.734598 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csp42\" (UniqueName: \"kubernetes.io/projected/9246cf31-7451-4007-a5d1-b63453a9b612-kube-api-access-csp42\") pod \"route-controller-manager-7cf74d78f5-hdvgc\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.742871 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46100c6d-7c73-47b9-a30e-6409aabaf9db" path="/var/lib/kubelet/pods/46100c6d-7c73-47b9-a30e-6409aabaf9db/volumes" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.743511 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9447f0-10ac-4740-a5a6-618b454b89be" path="/var/lib/kubelet/pods/fd9447f0-10ac-4740-a5a6-618b454b89be/volumes" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.753407 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:35:59 crc kubenswrapper[4980]: I0107 03:35:59.996542 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc"] Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.178526 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.179156 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.568160 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.568215 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.629533 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.881368 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" event={"ID":"9246cf31-7451-4007-a5d1-b63453a9b612","Type":"ContainerStarted","Data":"275c7f9af690e24841f03c26e49148c3b248117c65839f6f6c86eeebbcb96953"} Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.881463 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" event={"ID":"9246cf31-7451-4007-a5d1-b63453a9b612","Type":"ContainerStarted","Data":"d6fc81650e69639b9bad6c77503aa65fd800c744c6044d3f2249691047d94342"} Jan 07 03:36:00 crc 
kubenswrapper[4980]: I0107 03:36:00.883293 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:36:00 crc kubenswrapper[4980]: I0107 03:36:00.911402 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" podStartSLOduration=5.911369325 podStartE2EDuration="5.911369325s" podCreationTimestamp="2026-01-07 03:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:36:00.909714203 +0000 UTC m=+207.475408998" watchObservedRunningTime="2026-01-07 03:36:00.911369325 +0000 UTC m=+207.477064110" Jan 07 03:36:01 crc kubenswrapper[4980]: I0107 03:36:01.124309 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:36:01 crc kubenswrapper[4980]: I0107 03:36:01.224729 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2bfnr" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="registry-server" probeResult="failure" output=< Jan 07 03:36:01 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:36:01 crc kubenswrapper[4980]: > Jan 07 03:36:02 crc kubenswrapper[4980]: I0107 03:36:02.156411 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:36:02 crc kubenswrapper[4980]: I0107 03:36:02.156889 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:36:02 crc kubenswrapper[4980]: I0107 03:36:02.217631 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:36:02 crc kubenswrapper[4980]: I0107 03:36:02.954892 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:36:05 crc kubenswrapper[4980]: I0107 03:36:05.607790 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" podUID="7a9327ea-0e49-46d3-a849-bef3feed4a78" containerName="oauth-openshift" containerID="cri-o://350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f" gracePeriod=15 Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.543531 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.543636 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.543681 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.544318 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.544375 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a" gracePeriod=600 Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.575989 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.727447 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-service-ca\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.727860 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-serving-cert\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.727903 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-provider-selection\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.727927 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-router-certs\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.727952 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-policies\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.727990 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-idp-0-file-data\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728022 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-trusted-ca-bundle\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728046 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-login\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728073 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-ocp-branding-template\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728103 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-cliconfig\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728186 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcnwb\" (UniqueName: \"kubernetes.io/projected/7a9327ea-0e49-46d3-a849-bef3feed4a78-kube-api-access-rcnwb\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728223 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-dir\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728262 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-session\") pod \"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728307 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-error\") pod 
\"7a9327ea-0e49-46d3-a849-bef3feed4a78\" (UID: \"7a9327ea-0e49-46d3-a849-bef3feed4a78\") " Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.728855 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.729293 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.729364 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.729465 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.729848 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.734603 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.741084 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.741358 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.741633 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.741771 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.741882 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.742048 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9327ea-0e49-46d3-a849-bef3feed4a78-kube-api-access-rcnwb" (OuterVolumeSpecName: "kube-api-access-rcnwb") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "kube-api-access-rcnwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.742072 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.758684 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7a9327ea-0e49-46d3-a849-bef3feed4a78" (UID: "7a9327ea-0e49-46d3-a849-bef3feed4a78"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.829938 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.829996 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830016 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 
03:36:06.830036 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830058 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830077 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcnwb\" (UniqueName: \"kubernetes.io/projected/7a9327ea-0e49-46d3-a849-bef3feed4a78-kube-api-access-rcnwb\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830096 4980 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830113 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830130 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830147 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 
03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830164 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830182 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830212 4980 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a9327ea-0e49-46d3-a849-bef3feed4a78-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.830232 4980 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a9327ea-0e49-46d3-a849-bef3feed4a78-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.921476 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a" exitCode=0 Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.921595 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a"} Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.921678 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"735d601985cf5f44c3448b356ab2c68e5af439ea67b0bab54864bd397960c698"} Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.923762 4980 generic.go:334] "Generic (PLEG): container finished" podID="7a9327ea-0e49-46d3-a849-bef3feed4a78" containerID="350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f" exitCode=0 Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.923803 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" event={"ID":"7a9327ea-0e49-46d3-a849-bef3feed4a78","Type":"ContainerDied","Data":"350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f"} Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.923854 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" event={"ID":"7a9327ea-0e49-46d3-a849-bef3feed4a78","Type":"ContainerDied","Data":"1934c386f956929fbbe841b96df5e1a9fbaa70092e6985425dac20e7ed449888"} Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.923881 4980 scope.go:117] "RemoveContainer" containerID="350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.923876 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtrzt" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.948055 4980 scope.go:117] "RemoveContainer" containerID="350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f" Jan 07 03:36:06 crc kubenswrapper[4980]: E0107 03:36:06.948747 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f\": container with ID starting with 350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f not found: ID does not exist" containerID="350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.948783 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f"} err="failed to get container status \"350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f\": rpc error: code = NotFound desc = could not find container \"350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f\": container with ID starting with 350a41f48157c0cbd60be4fc9756cd31e6c0cff62b6e3f18aa37cfaa520a184f not found: ID does not exist" Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.955294 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtrzt"] Jan 07 03:36:06 crc kubenswrapper[4980]: I0107 03:36:06.960992 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtrzt"] Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.446003 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9689bc7c4-7gf5r"] Jan 07 03:36:07 crc kubenswrapper[4980]: E0107 03:36:07.447207 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a9327ea-0e49-46d3-a849-bef3feed4a78" containerName="oauth-openshift" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.447253 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9327ea-0e49-46d3-a849-bef3feed4a78" containerName="oauth-openshift" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.447494 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9327ea-0e49-46d3-a849-bef3feed4a78" containerName="oauth-openshift" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.448546 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.457045 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.457432 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.457686 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.457739 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.459328 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.459423 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.459867 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" 
Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.459987 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.460162 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.460167 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.460369 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.460930 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.470610 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.476941 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9689bc7c4-7gf5r"] Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.479075 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.488199 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.543973 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff0f0728-340b-4f9e-962c-2f68206b6e98-audit-dir\") pod 
\"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544073 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-error\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544162 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-audit-policies\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544212 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-service-ca\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544261 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-login\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: 
I0107 03:36:07.544318 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-session\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544355 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544416 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-router-certs\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544517 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544676 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544740 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544822 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqhq\" (UniqueName: \"kubernetes.io/projected/ff0f0728-340b-4f9e-962c-2f68206b6e98-kube-api-access-mgqhq\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544869 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.544955 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-ocp-branding-template\") 
pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.646786 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-session\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.646967 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647043 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-router-certs\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647185 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647275 
4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647341 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647400 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqhq\" (UniqueName: \"kubernetes.io/projected/ff0f0728-340b-4f9e-962c-2f68206b6e98-kube-api-access-mgqhq\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647462 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647674 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647773 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff0f0728-340b-4f9e-962c-2f68206b6e98-audit-dir\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647838 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-error\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.647992 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-audit-policies\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.648078 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-service-ca\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 
03:36:07.648146 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-login\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.650221 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff0f0728-340b-4f9e-962c-2f68206b6e98-audit-dir\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.651662 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.652239 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.652608 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-audit-policies\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") 
" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.652833 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-service-ca\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.658676 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-error\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.659426 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-session\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.659959 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.661922 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.662186 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-template-login\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.662468 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.665448 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.676537 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff0f0728-340b-4f9e-962c-2f68206b6e98-v4-0-config-system-router-certs\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " 
pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.682879 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqhq\" (UniqueName: \"kubernetes.io/projected/ff0f0728-340b-4f9e-962c-2f68206b6e98-kube-api-access-mgqhq\") pod \"oauth-openshift-9689bc7c4-7gf5r\" (UID: \"ff0f0728-340b-4f9e-962c-2f68206b6e98\") " pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.759088 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9327ea-0e49-46d3-a849-bef3feed4a78" path="/var/lib/kubelet/pods/7a9327ea-0e49-46d3-a849-bef3feed4a78/volumes" Jan 07 03:36:07 crc kubenswrapper[4980]: I0107 03:36:07.809253 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:08 crc kubenswrapper[4980]: I0107 03:36:08.274120 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9689bc7c4-7gf5r"] Jan 07 03:36:08 crc kubenswrapper[4980]: W0107 03:36:08.280748 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff0f0728_340b_4f9e_962c_2f68206b6e98.slice/crio-e8d691748a036d088c72afc65a2c6ac53d3846273273d6e897038a8bf40d8e11 WatchSource:0}: Error finding container e8d691748a036d088c72afc65a2c6ac53d3846273273d6e897038a8bf40d8e11: Status 404 returned error can't find the container with id e8d691748a036d088c72afc65a2c6ac53d3846273273d6e897038a8bf40d8e11 Jan 07 03:36:08 crc kubenswrapper[4980]: I0107 03:36:08.940664 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" event={"ID":"ff0f0728-340b-4f9e-962c-2f68206b6e98","Type":"ContainerStarted","Data":"e8d691748a036d088c72afc65a2c6ac53d3846273273d6e897038a8bf40d8e11"} Jan 07 
03:36:09 crc kubenswrapper[4980]: I0107 03:36:09.950742 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" event={"ID":"ff0f0728-340b-4f9e-962c-2f68206b6e98","Type":"ContainerStarted","Data":"8c7cf7e68af16d48646c91ca9a71975764afe0e3ca1db8c4610e3cd76866c42e"} Jan 07 03:36:09 crc kubenswrapper[4980]: I0107 03:36:09.951163 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:09 crc kubenswrapper[4980]: I0107 03:36:09.987712 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" podStartSLOduration=29.987697009 podStartE2EDuration="29.987697009s" podCreationTimestamp="2026-01-07 03:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:36:09.985677455 +0000 UTC m=+216.551372220" watchObservedRunningTime="2026-01-07 03:36:09.987697009 +0000 UTC m=+216.553391744" Jan 07 03:36:10 crc kubenswrapper[4980]: I0107 03:36:10.128012 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9689bc7c4-7gf5r" Jan 07 03:36:10 crc kubenswrapper[4980]: I0107 03:36:10.228952 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:36:10 crc kubenswrapper[4980]: I0107 03:36:10.283143 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:36:10 crc kubenswrapper[4980]: I0107 03:36:10.634463 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:36:11 crc kubenswrapper[4980]: I0107 03:36:11.832741 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-42gz5"] Jan 07 03:36:11 crc kubenswrapper[4980]: I0107 03:36:11.834143 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-42gz5" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="registry-server" containerID="cri-o://3fb83fa1b8b40e0ed5c6c4b4c673051eeb7963aabd4e6ef2fc46d8884ad96791" gracePeriod=2 Jan 07 03:36:11 crc kubenswrapper[4980]: I0107 03:36:11.976000 4980 generic.go:334] "Generic (PLEG): container finished" podID="52350cdf-a327-410e-89cf-6666175b6ddc" containerID="3fb83fa1b8b40e0ed5c6c4b4c673051eeb7963aabd4e6ef2fc46d8884ad96791" exitCode=0 Jan 07 03:36:11 crc kubenswrapper[4980]: I0107 03:36:11.976052 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42gz5" event={"ID":"52350cdf-a327-410e-89cf-6666175b6ddc","Type":"ContainerDied","Data":"3fb83fa1b8b40e0ed5c6c4b4c673051eeb7963aabd4e6ef2fc46d8884ad96791"} Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.382066 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.526672 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c72lr\" (UniqueName: \"kubernetes.io/projected/52350cdf-a327-410e-89cf-6666175b6ddc-kube-api-access-c72lr\") pod \"52350cdf-a327-410e-89cf-6666175b6ddc\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.526755 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-utilities\") pod \"52350cdf-a327-410e-89cf-6666175b6ddc\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.526886 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-catalog-content\") pod \"52350cdf-a327-410e-89cf-6666175b6ddc\" (UID: \"52350cdf-a327-410e-89cf-6666175b6ddc\") " Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.528314 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-utilities" (OuterVolumeSpecName: "utilities") pod "52350cdf-a327-410e-89cf-6666175b6ddc" (UID: "52350cdf-a327-410e-89cf-6666175b6ddc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.536536 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52350cdf-a327-410e-89cf-6666175b6ddc-kube-api-access-c72lr" (OuterVolumeSpecName: "kube-api-access-c72lr") pod "52350cdf-a327-410e-89cf-6666175b6ddc" (UID: "52350cdf-a327-410e-89cf-6666175b6ddc"). InnerVolumeSpecName "kube-api-access-c72lr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.617482 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52350cdf-a327-410e-89cf-6666175b6ddc" (UID: "52350cdf-a327-410e-89cf-6666175b6ddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.630239 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.630295 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c72lr\" (UniqueName: \"kubernetes.io/projected/52350cdf-a327-410e-89cf-6666175b6ddc-kube-api-access-c72lr\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.630314 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52350cdf-a327-410e-89cf-6666175b6ddc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.987609 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42gz5" event={"ID":"52350cdf-a327-410e-89cf-6666175b6ddc","Type":"ContainerDied","Data":"cc420900443e6edc5357e944f00f135ab72783abe1763bd95383c1be6e43aadb"} Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.987728 4980 scope.go:117] "RemoveContainer" containerID="3fb83fa1b8b40e0ed5c6c4b4c673051eeb7963aabd4e6ef2fc46d8884ad96791" Jan 07 03:36:12 crc kubenswrapper[4980]: I0107 03:36:12.987731 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42gz5" Jan 07 03:36:13 crc kubenswrapper[4980]: I0107 03:36:13.012201 4980 scope.go:117] "RemoveContainer" containerID="08b52504774cea3b8fe2d4e62e93d02be7267051418980b57b0068e0adffc3c1" Jan 07 03:36:13 crc kubenswrapper[4980]: I0107 03:36:13.041752 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42gz5"] Jan 07 03:36:13 crc kubenswrapper[4980]: I0107 03:36:13.053303 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-42gz5"] Jan 07 03:36:13 crc kubenswrapper[4980]: I0107 03:36:13.061074 4980 scope.go:117] "RemoveContainer" containerID="c4e6f2555b3b53d674785252d618d956b8bb184968df65f278f94154677cfc7d" Jan 07 03:36:13 crc kubenswrapper[4980]: I0107 03:36:13.749382 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" path="/var/lib/kubelet/pods/52350cdf-a327-410e-89cf-6666175b6ddc/volumes" Jan 07 03:36:15 crc kubenswrapper[4980]: I0107 03:36:15.489710 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9bf884b85-ml7cd"] Jan 07 03:36:15 crc kubenswrapper[4980]: I0107 03:36:15.490113 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" podUID="bb59366e-e7ec-44b4-ae76-ea152401a3c4" containerName="controller-manager" containerID="cri-o://00f72de41c8a12fe7c9fdb5255de29f0e663d062dafc7c294a63116db09aaa5b" gracePeriod=30 Jan 07 03:36:15 crc kubenswrapper[4980]: I0107 03:36:15.602948 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc"] Jan 07 03:36:15 crc kubenswrapper[4980]: I0107 03:36:15.603283 4980 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" podUID="9246cf31-7451-4007-a5d1-b63453a9b612" containerName="route-controller-manager" containerID="cri-o://275c7f9af690e24841f03c26e49148c3b248117c65839f6f6c86eeebbcb96953" gracePeriod=30 Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.016227 4980 generic.go:334] "Generic (PLEG): container finished" podID="9246cf31-7451-4007-a5d1-b63453a9b612" containerID="275c7f9af690e24841f03c26e49148c3b248117c65839f6f6c86eeebbcb96953" exitCode=0 Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.016326 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" event={"ID":"9246cf31-7451-4007-a5d1-b63453a9b612","Type":"ContainerDied","Data":"275c7f9af690e24841f03c26e49148c3b248117c65839f6f6c86eeebbcb96953"} Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.018850 4980 generic.go:334] "Generic (PLEG): container finished" podID="bb59366e-e7ec-44b4-ae76-ea152401a3c4" containerID="00f72de41c8a12fe7c9fdb5255de29f0e663d062dafc7c294a63116db09aaa5b" exitCode=0 Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.018902 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" event={"ID":"bb59366e-e7ec-44b4-ae76-ea152401a3c4","Type":"ContainerDied","Data":"00f72de41c8a12fe7c9fdb5255de29f0e663d062dafc7c294a63116db09aaa5b"} Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.170928 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.176432 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189674 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-config\") pod \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189754 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-config\") pod \"9246cf31-7451-4007-a5d1-b63453a9b612\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189816 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csp42\" (UniqueName: \"kubernetes.io/projected/9246cf31-7451-4007-a5d1-b63453a9b612-kube-api-access-csp42\") pod \"9246cf31-7451-4007-a5d1-b63453a9b612\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189855 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x4vg\" (UniqueName: \"kubernetes.io/projected/bb59366e-e7ec-44b4-ae76-ea152401a3c4-kube-api-access-2x4vg\") pod \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189897 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-client-ca\") pod \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189925 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb59366e-e7ec-44b4-ae76-ea152401a3c4-serving-cert\") pod \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189950 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-client-ca\") pod \"9246cf31-7451-4007-a5d1-b63453a9b612\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.189978 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9246cf31-7451-4007-a5d1-b63453a9b612-serving-cert\") pod \"9246cf31-7451-4007-a5d1-b63453a9b612\" (UID: \"9246cf31-7451-4007-a5d1-b63453a9b612\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.190015 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-proxy-ca-bundles\") pod \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\" (UID: \"bb59366e-e7ec-44b4-ae76-ea152401a3c4\") " Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.191799 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bb59366e-e7ec-44b4-ae76-ea152401a3c4" (UID: "bb59366e-e7ec-44b4-ae76-ea152401a3c4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.192174 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "bb59366e-e7ec-44b4-ae76-ea152401a3c4" (UID: "bb59366e-e7ec-44b4-ae76-ea152401a3c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.192220 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-client-ca" (OuterVolumeSpecName: "client-ca") pod "9246cf31-7451-4007-a5d1-b63453a9b612" (UID: "9246cf31-7451-4007-a5d1-b63453a9b612"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.192370 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-config" (OuterVolumeSpecName: "config") pod "bb59366e-e7ec-44b4-ae76-ea152401a3c4" (UID: "bb59366e-e7ec-44b4-ae76-ea152401a3c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.192463 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-config" (OuterVolumeSpecName: "config") pod "9246cf31-7451-4007-a5d1-b63453a9b612" (UID: "9246cf31-7451-4007-a5d1-b63453a9b612"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.197970 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9246cf31-7451-4007-a5d1-b63453a9b612-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9246cf31-7451-4007-a5d1-b63453a9b612" (UID: "9246cf31-7451-4007-a5d1-b63453a9b612"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.197977 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb59366e-e7ec-44b4-ae76-ea152401a3c4-kube-api-access-2x4vg" (OuterVolumeSpecName: "kube-api-access-2x4vg") pod "bb59366e-e7ec-44b4-ae76-ea152401a3c4" (UID: "bb59366e-e7ec-44b4-ae76-ea152401a3c4"). InnerVolumeSpecName "kube-api-access-2x4vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.199445 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb59366e-e7ec-44b4-ae76-ea152401a3c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bb59366e-e7ec-44b4-ae76-ea152401a3c4" (UID: "bb59366e-e7ec-44b4-ae76-ea152401a3c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.200961 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9246cf31-7451-4007-a5d1-b63453a9b612-kube-api-access-csp42" (OuterVolumeSpecName: "kube-api-access-csp42") pod "9246cf31-7451-4007-a5d1-b63453a9b612" (UID: "9246cf31-7451-4007-a5d1-b63453a9b612"). InnerVolumeSpecName "kube-api-access-csp42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.291897 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csp42\" (UniqueName: \"kubernetes.io/projected/9246cf31-7451-4007-a5d1-b63453a9b612-kube-api-access-csp42\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.291960 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x4vg\" (UniqueName: \"kubernetes.io/projected/bb59366e-e7ec-44b4-ae76-ea152401a3c4-kube-api-access-2x4vg\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.291978 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.291994 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb59366e-e7ec-44b4-ae76-ea152401a3c4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.292007 4980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.292020 4980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9246cf31-7451-4007-a5d1-b63453a9b612-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.292031 4980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.292044 4980 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb59366e-e7ec-44b4-ae76-ea152401a3c4-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:16 crc kubenswrapper[4980]: I0107 03:36:16.292056 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246cf31-7451-4007-a5d1-b63453a9b612-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.030006 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" event={"ID":"bb59366e-e7ec-44b4-ae76-ea152401a3c4","Type":"ContainerDied","Data":"f519f495d4295171203d969c532efc05387d4abe3d40fca59ad496b07dfc7088"} Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.030467 4980 scope.go:117] "RemoveContainer" containerID="00f72de41c8a12fe7c9fdb5255de29f0e663d062dafc7c294a63116db09aaa5b" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.030090 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9bf884b85-ml7cd" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.033608 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" event={"ID":"9246cf31-7451-4007-a5d1-b63453a9b612","Type":"ContainerDied","Data":"d6fc81650e69639b9bad6c77503aa65fd800c744c6044d3f2249691047d94342"} Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.033693 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.063585 4980 scope.go:117] "RemoveContainer" containerID="275c7f9af690e24841f03c26e49148c3b248117c65839f6f6c86eeebbcb96953" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.102481 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9bf884b85-ml7cd"] Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.107362 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9bf884b85-ml7cd"] Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.112774 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc"] Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.117806 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf74d78f5-hdvgc"] Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.451890 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp"] Jan 07 03:36:17 crc kubenswrapper[4980]: E0107 03:36:17.452240 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="extract-utilities" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452260 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="extract-utilities" Jan 07 03:36:17 crc kubenswrapper[4980]: E0107 03:36:17.452274 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="extract-content" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452280 4980 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="extract-content" Jan 07 03:36:17 crc kubenswrapper[4980]: E0107 03:36:17.452290 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9246cf31-7451-4007-a5d1-b63453a9b612" containerName="route-controller-manager" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452297 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9246cf31-7451-4007-a5d1-b63453a9b612" containerName="route-controller-manager" Jan 07 03:36:17 crc kubenswrapper[4980]: E0107 03:36:17.452312 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb59366e-e7ec-44b4-ae76-ea152401a3c4" containerName="controller-manager" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452319 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb59366e-e7ec-44b4-ae76-ea152401a3c4" containerName="controller-manager" Jan 07 03:36:17 crc kubenswrapper[4980]: E0107 03:36:17.452326 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="registry-server" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452332 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="registry-server" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452461 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="52350cdf-a327-410e-89cf-6666175b6ddc" containerName="registry-server" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452476 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9246cf31-7451-4007-a5d1-b63453a9b612" containerName="route-controller-manager" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.452496 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb59366e-e7ec-44b4-ae76-ea152401a3c4" containerName="controller-manager" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.453039 4980 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.455236 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.455402 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.456042 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.456314 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.456377 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.458248 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.463194 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f56476bdd-85twt"] Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.463856 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.466578 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.466606 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.466819 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.466842 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.466703 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.466897 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.467117 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp"] Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.469700 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f56476bdd-85twt"] Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.477918 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513317 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-proxy-ca-bundles\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513413 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c26f063-2779-49ca-8707-aee0db4cc7a6-config\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513450 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c26f063-2779-49ca-8707-aee0db4cc7a6-serving-cert\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513485 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59jq\" (UniqueName: \"kubernetes.io/projected/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-kube-api-access-d59jq\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513521 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw94w\" (UniqueName: \"kubernetes.io/projected/1c26f063-2779-49ca-8707-aee0db4cc7a6-kube-api-access-xw94w\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " 
pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513588 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-client-ca\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513629 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-config\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513670 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-serving-cert\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.513708 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c26f063-2779-49ca-8707-aee0db4cc7a6-client-ca\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615392 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1c26f063-2779-49ca-8707-aee0db4cc7a6-config\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615453 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c26f063-2779-49ca-8707-aee0db4cc7a6-serving-cert\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615494 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59jq\" (UniqueName: \"kubernetes.io/projected/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-kube-api-access-d59jq\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615534 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw94w\" (UniqueName: \"kubernetes.io/projected/1c26f063-2779-49ca-8707-aee0db4cc7a6-kube-api-access-xw94w\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615632 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-client-ca\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 
03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615673 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-config\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615709 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-serving-cert\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615750 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c26f063-2779-49ca-8707-aee0db4cc7a6-client-ca\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.615891 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-proxy-ca-bundles\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.619014 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-proxy-ca-bundles\") pod \"controller-manager-5f56476bdd-85twt\" (UID: 
\"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.619061 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-client-ca\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.619399 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c26f063-2779-49ca-8707-aee0db4cc7a6-client-ca\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.622267 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-config\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.623061 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c26f063-2779-49ca-8707-aee0db4cc7a6-config\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.625169 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c26f063-2779-49ca-8707-aee0db4cc7a6-serving-cert\") 
pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.625712 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-serving-cert\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.633333 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw94w\" (UniqueName: \"kubernetes.io/projected/1c26f063-2779-49ca-8707-aee0db4cc7a6-kube-api-access-xw94w\") pod \"route-controller-manager-59bbd474d8-rjkvp\" (UID: \"1c26f063-2779-49ca-8707-aee0db4cc7a6\") " pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.641879 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59jq\" (UniqueName: \"kubernetes.io/projected/d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb-kube-api-access-d59jq\") pod \"controller-manager-5f56476bdd-85twt\" (UID: \"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb\") " pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.763449 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9246cf31-7451-4007-a5d1-b63453a9b612" path="/var/lib/kubelet/pods/9246cf31-7451-4007-a5d1-b63453a9b612/volumes" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.766073 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb59366e-e7ec-44b4-ae76-ea152401a3c4" path="/var/lib/kubelet/pods/bb59366e-e7ec-44b4-ae76-ea152401a3c4/volumes" Jan 07 03:36:17 crc 
kubenswrapper[4980]: I0107 03:36:17.793050 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:17 crc kubenswrapper[4980]: I0107 03:36:17.809746 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:18 crc kubenswrapper[4980]: I0107 03:36:18.064407 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f56476bdd-85twt"] Jan 07 03:36:18 crc kubenswrapper[4980]: I0107 03:36:18.073217 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp"] Jan 07 03:36:18 crc kubenswrapper[4980]: W0107 03:36:18.080209 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c26f063_2779_49ca_8707_aee0db4cc7a6.slice/crio-38165053739a30bdef41c2e9d73c289d56c104d3139e5412087abb9a06d69b5d WatchSource:0}: Error finding container 38165053739a30bdef41c2e9d73c289d56c104d3139e5412087abb9a06d69b5d: Status 404 returned error can't find the container with id 38165053739a30bdef41c2e9d73c289d56c104d3139e5412087abb9a06d69b5d Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.052459 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" event={"ID":"1c26f063-2779-49ca-8707-aee0db4cc7a6","Type":"ContainerStarted","Data":"ebacfd6d708aca9beef5a06ee74ee6945e3362d952667e158a1de6da1c94bb25"} Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.052951 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" 
event={"ID":"1c26f063-2779-49ca-8707-aee0db4cc7a6","Type":"ContainerStarted","Data":"38165053739a30bdef41c2e9d73c289d56c104d3139e5412087abb9a06d69b5d"} Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.052987 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.054767 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" event={"ID":"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb","Type":"ContainerStarted","Data":"1ace28b5ca065fb5f7c6d7494f7e77234384190631695720f89f3dfd5a44ad33"} Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.054833 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" event={"ID":"d659b9fb-8daa-4a99-9ab7-cf59c08cf8cb","Type":"ContainerStarted","Data":"60022b5c5d67ab5c31208ea23fb232182fff801742ad60a7d7b8f941cabb4759"} Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.055064 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.061192 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.061882 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.081235 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59bbd474d8-rjkvp" podStartSLOduration=4.081213977 podStartE2EDuration="4.081213977s" podCreationTimestamp="2026-01-07 
03:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:36:19.07680579 +0000 UTC m=+225.642500555" watchObservedRunningTime="2026-01-07 03:36:19.081213977 +0000 UTC m=+225.646908742" Jan 07 03:36:19 crc kubenswrapper[4980]: I0107 03:36:19.157921 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f56476bdd-85twt" podStartSLOduration=4.157905128 podStartE2EDuration="4.157905128s" podCreationTimestamp="2026-01-07 03:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:36:19.157195836 +0000 UTC m=+225.722890611" watchObservedRunningTime="2026-01-07 03:36:19.157905128 +0000 UTC m=+225.723599863" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.608913 4980 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.609857 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.610673 4980 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.610955 4980 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.610957 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883" gracePeriod=15 Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.610958 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d" gracePeriod=15 Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.611000 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff" gracePeriod=15 Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.611080 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7" gracePeriod=15 Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 
03:36:25.611086 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35" gracePeriod=15 Jan 07 03:36:25 crc kubenswrapper[4980]: E0107 03:36:25.611588 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.611679 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 07 03:36:25 crc kubenswrapper[4980]: E0107 03:36:25.611759 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.611818 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 07 03:36:25 crc kubenswrapper[4980]: E0107 03:36:25.611877 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.611935 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 07 03:36:25 crc kubenswrapper[4980]: E0107 03:36:25.612002 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.612066 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 07 03:36:25 crc kubenswrapper[4980]: E0107 03:36:25.612129 4980 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.612190 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 07 03:36:25 crc kubenswrapper[4980]: E0107 03:36:25.612256 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.612316 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 07 03:36:25 crc kubenswrapper[4980]: E0107 03:36:25.612381 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.612645 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.614222 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.614311 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.614388 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.614457 4980 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.614529 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.614623 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660018 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660063 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660081 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660102 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660219 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660287 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660364 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.660528 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761518 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761583 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761621 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761621 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761641 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761679 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761746 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761785 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761845 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761868 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761851 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.761891 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.762097 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.762215 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.771472 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:25 crc kubenswrapper[4980]: I0107 03:36:25.771676 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.100786 4980 generic.go:334] "Generic (PLEG): container finished" podID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" containerID="3f1ebfbb6efecc52be8acf28de15d7eb285296a226342e9f1423990a913d1c5f" exitCode=0 Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.100859 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af","Type":"ContainerDied","Data":"3f1ebfbb6efecc52be8acf28de15d7eb285296a226342e9f1423990a913d1c5f"} Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.101792 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.119198 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.122773 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.126244 4980 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d" exitCode=0 Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.126263 4980 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff" exitCode=0 Jan 07 
03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.126272 4980 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7" exitCode=0 Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.126281 4980 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35" exitCode=2 Jan 07 03:36:26 crc kubenswrapper[4980]: I0107 03:36:26.126319 4980 scope.go:117] "RemoveContainer" containerID="7ebd62625fdd75189729f7db3bce70cc148851972cea358e98dbe8c2d81b17b0" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.134312 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.556806 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.559006 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.694163 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kubelet-dir\") pod \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.694332 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-var-lock\") pod \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.694359 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kube-api-access\") pod \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\" (UID: \"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af\") " Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.695767 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-var-lock" (OuterVolumeSpecName: "var-lock") pod "9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" (UID: "9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.698513 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" (UID: "9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.732399 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" (UID: "9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:36:27 crc kubenswrapper[4980]: E0107 03:36:27.738783 4980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:27 crc kubenswrapper[4980]: E0107 03:36:27.739283 4980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:27 crc kubenswrapper[4980]: E0107 03:36:27.739635 4980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:27 crc kubenswrapper[4980]: E0107 03:36:27.739839 4980 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:27 crc kubenswrapper[4980]: E0107 03:36:27.740007 4980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.740029 4980 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 07 03:36:27 crc kubenswrapper[4980]: E0107 03:36:27.740167 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="200ms" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.796443 4980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.796491 4980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-var-lock\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.796504 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:27 crc kubenswrapper[4980]: E0107 03:36:27.941017 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="400ms" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.970471 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.971227 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.971996 4980 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:27 crc kubenswrapper[4980]: I0107 03:36:27.972695 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.101535 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.101654 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.101728 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.102377 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.102423 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.102451 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.144161 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af","Type":"ContainerDied","Data":"4cc4c8076ef2368f5f7436e22dee001fa3d5189fdbb32df42d3d2934e956ae13"} Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.144230 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.144286 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cc4c8076ef2368f5f7436e22dee001fa3d5189fdbb32df42d3d2934e956ae13" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.149911 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.150733 4980 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883" exitCode=0 Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.150838 4980 scope.go:117] "RemoveContainer" containerID="31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.151009 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.153164 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.154134 4980 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.174695 4980 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.175272 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.175784 4980 scope.go:117] "RemoveContainer" containerID="749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.191371 4980 scope.go:117] "RemoveContainer" containerID="86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7" Jan 07 03:36:28 crc 
kubenswrapper[4980]: I0107 03:36:28.203867 4980 scope.go:117] "RemoveContainer" containerID="fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.204443 4980 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.204486 4980 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.204507 4980 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.222088 4980 scope.go:117] "RemoveContainer" containerID="acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.238689 4980 scope.go:117] "RemoveContainer" containerID="a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.258621 4980 scope.go:117] "RemoveContainer" containerID="31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d" Jan 07 03:36:28 crc kubenswrapper[4980]: E0107 03:36:28.258995 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\": container with ID starting with 31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d not found: ID does not exist" containerID="31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.259051 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d"} err="failed to get container status \"31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\": rpc error: code = NotFound desc = could not find container \"31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d\": container with ID starting with 31a718bd6f59eb12236ad751f3ea7ff5334d14ccaa0aba92b431033514bdc86d not found: ID does not exist" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.259089 4980 scope.go:117] "RemoveContainer" containerID="749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff" Jan 07 03:36:28 crc kubenswrapper[4980]: E0107 03:36:28.259381 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\": container with ID starting with 749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff not found: ID does not exist" containerID="749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.259432 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff"} err="failed to get container status \"749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\": rpc error: code = NotFound desc = could not find container \"749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff\": container with ID starting with 749d9fdabec26bbfb163f07750c69fdc54608686ba50924dfc1b5145b02dc8ff not found: ID does not exist" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.259461 4980 scope.go:117] "RemoveContainer" containerID="86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7" Jan 07 03:36:28 crc kubenswrapper[4980]: E0107 
03:36:28.259742 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\": container with ID starting with 86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7 not found: ID does not exist" containerID="86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.259784 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7"} err="failed to get container status \"86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\": rpc error: code = NotFound desc = could not find container \"86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7\": container with ID starting with 86b31883ec748f5b7dd01f52eb2c03c7e8aa2fb26a0477ea724ffa9db6b442c7 not found: ID does not exist" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.259812 4980 scope.go:117] "RemoveContainer" containerID="fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35" Jan 07 03:36:28 crc kubenswrapper[4980]: E0107 03:36:28.260087 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\": container with ID starting with fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35 not found: ID does not exist" containerID="fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.260120 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35"} err="failed to get container status \"fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\": rpc 
error: code = NotFound desc = could not find container \"fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35\": container with ID starting with fbaddd891397ee3f1bc6fd74f42e579b52d2601c0c5516ec7376c4008441cf35 not found: ID does not exist" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.260143 4980 scope.go:117] "RemoveContainer" containerID="acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883" Jan 07 03:36:28 crc kubenswrapper[4980]: E0107 03:36:28.260412 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\": container with ID starting with acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883 not found: ID does not exist" containerID="acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.260451 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883"} err="failed to get container status \"acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\": rpc error: code = NotFound desc = could not find container \"acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883\": container with ID starting with acbf28e001490394ff53acecbeb86b97825e0594440db29578890d948dfec883 not found: ID does not exist" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.260476 4980 scope.go:117] "RemoveContainer" containerID="a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808" Jan 07 03:36:28 crc kubenswrapper[4980]: E0107 03:36:28.260800 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\": container with ID starting with 
a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808 not found: ID does not exist" containerID="a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808" Jan 07 03:36:28 crc kubenswrapper[4980]: I0107 03:36:28.260832 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808"} err="failed to get container status \"a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\": rpc error: code = NotFound desc = could not find container \"a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808\": container with ID starting with a3bb91e4e21256eec78bd1a2da200eed3c1789d9d89b60beee477f8c98d08808 not found: ID does not exist" Jan 07 03:36:28 crc kubenswrapper[4980]: E0107 03:36:28.342864 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="800ms" Jan 07 03:36:29 crc kubenswrapper[4980]: E0107 03:36:29.143767 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="1.6s" Jan 07 03:36:29 crc kubenswrapper[4980]: I0107 03:36:29.747667 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 07 03:36:30 crc kubenswrapper[4980]: E0107 03:36:30.637158 4980 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.65:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:30 crc kubenswrapper[4980]: I0107 03:36:30.638200 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:30 crc kubenswrapper[4980]: W0107 03:36:30.664581 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e09fb3244520d9dee9be1458a899ca40c488f5f6042f5e77cace97bd14db02ec WatchSource:0}: Error finding container e09fb3244520d9dee9be1458a899ca40c488f5f6042f5e77cace97bd14db02ec: Status 404 returned error can't find the container with id e09fb3244520d9dee9be1458a899ca40c488f5f6042f5e77cace97bd14db02ec Jan 07 03:36:30 crc kubenswrapper[4980]: E0107 03:36:30.668023 4980 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1888559df5eeed35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 03:36:30.667222325 +0000 UTC m=+237.232917090,LastTimestamp:2026-01-07 03:36:30.667222325 +0000 UTC m=+237.232917090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 03:36:30 crc kubenswrapper[4980]: E0107 03:36:30.744956 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="3.2s" Jan 07 03:36:31 crc kubenswrapper[4980]: I0107 03:36:31.170325 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9"} Jan 07 03:36:31 crc kubenswrapper[4980]: I0107 03:36:31.170867 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e09fb3244520d9dee9be1458a899ca40c488f5f6042f5e77cace97bd14db02ec"} Jan 07 03:36:31 crc kubenswrapper[4980]: E0107 03:36:31.171755 4980 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:36:31 crc kubenswrapper[4980]: I0107 03:36:31.171961 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:33 crc kubenswrapper[4980]: I0107 03:36:33.737909 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:33 crc kubenswrapper[4980]: E0107 03:36:33.945804 4980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="6.4s" Jan 07 03:36:37 crc kubenswrapper[4980]: I0107 03:36:37.735733 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:37 crc kubenswrapper[4980]: I0107 03:36:37.738127 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:37 crc kubenswrapper[4980]: I0107 03:36:37.760644 4980 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:37 crc kubenswrapper[4980]: I0107 03:36:37.760699 4980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:37 crc kubenswrapper[4980]: E0107 03:36:37.761381 4980 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:37 crc kubenswrapper[4980]: I0107 03:36:37.762267 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:38 crc kubenswrapper[4980]: E0107 03:36:38.214228 4980 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1888559df5eeed35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 03:36:30.667222325 +0000 UTC m=+237.232917090,LastTimestamp:2026-01-07 03:36:30.667222325 +0000 UTC m=+237.232917090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 03:36:38 crc kubenswrapper[4980]: I0107 03:36:38.400164 4980 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a472423dfddc0ec3ca96fd9d7c17bbd6d470fea7548c2fc7f0cfc05efd9b9410" exitCode=0 Jan 07 03:36:38 crc kubenswrapper[4980]: I0107 03:36:38.400309 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a472423dfddc0ec3ca96fd9d7c17bbd6d470fea7548c2fc7f0cfc05efd9b9410"} Jan 07 03:36:38 crc kubenswrapper[4980]: I0107 03:36:38.400646 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f3322258a347914b8bee9680cef9d613ecb24feabc1a8a276f2a17b7ee33cde"} Jan 07 03:36:38 crc kubenswrapper[4980]: I0107 03:36:38.401164 4980 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:38 crc kubenswrapper[4980]: I0107 03:36:38.401206 4980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:38 crc kubenswrapper[4980]: E0107 03:36:38.401874 4980 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:38 crc kubenswrapper[4980]: I0107 03:36:38.401842 4980 status_manager.go:851] "Failed to get status for pod" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Jan 07 03:36:39 crc kubenswrapper[4980]: I0107 03:36:39.418234 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"af718ac83efd8b122b28223b4fc610df0193b255474b16013eede8f2f3cfe089"} Jan 07 03:36:39 crc kubenswrapper[4980]: I0107 03:36:39.418286 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"18f7888bcc37dd67d649c7523ec19141dba72aadce706705c6f669f0ae93cae4"} Jan 07 03:36:40 crc 
kubenswrapper[4980]: I0107 03:36:40.430132 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"70f797242cbc751a4685c54cd1be93e7d43597888fb5c169352b439633b7311f"} Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.430535 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.430573 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a8bb74076d721c2b6e4df471c7970c001b91b106f4ae897b7624cc17e5aa76a6"} Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.430588 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4adaf2e2b4c21c209c5b79dea5aa9595374bc96964fee80fb5fd6a97848a4605"} Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.430374 4980 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.430614 4980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.432897 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.432945 4980 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce" exitCode=1 Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.432977 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce"} Jan 07 03:36:40 crc kubenswrapper[4980]: I0107 03:36:40.433430 4980 scope.go:117] "RemoveContainer" containerID="cc5b5d5f3c3140c9571e75e5ab6275b894a3b61ff307ef0996f731d40bc542ce" Jan 07 03:36:41 crc kubenswrapper[4980]: I0107 03:36:41.339188 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:36:41 crc kubenswrapper[4980]: I0107 03:36:41.442299 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 07 03:36:41 crc kubenswrapper[4980]: I0107 03:36:41.442384 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"48ab52794ff87953c2673d936d5f9d3d5565a48fc0678738fd9f5f9b15526437"} Jan 07 03:36:42 crc kubenswrapper[4980]: I0107 03:36:42.763344 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:42 crc kubenswrapper[4980]: I0107 03:36:42.763514 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:42 crc kubenswrapper[4980]: I0107 03:36:42.770344 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:44 crc kubenswrapper[4980]: I0107 
03:36:44.105631 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:36:45 crc kubenswrapper[4980]: I0107 03:36:45.443337 4980 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:45 crc kubenswrapper[4980]: I0107 03:36:45.549441 4980 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5401683d-359e-4091-b68e-bf20ee64c946" Jan 07 03:36:46 crc kubenswrapper[4980]: I0107 03:36:46.481319 4980 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:46 crc kubenswrapper[4980]: I0107 03:36:46.481372 4980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:46 crc kubenswrapper[4980]: I0107 03:36:46.486915 4980 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5401683d-359e-4091-b68e-bf20ee64c946" Jan 07 03:36:46 crc kubenswrapper[4980]: I0107 03:36:46.489341 4980 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://18f7888bcc37dd67d649c7523ec19141dba72aadce706705c6f669f0ae93cae4" Jan 07 03:36:46 crc kubenswrapper[4980]: I0107 03:36:46.489598 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:47 crc kubenswrapper[4980]: I0107 03:36:47.488442 4980 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:47 crc kubenswrapper[4980]: I0107 03:36:47.488497 4980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d8350df-0578-466d-a505-54f93e6365e1" Jan 07 03:36:47 crc kubenswrapper[4980]: I0107 03:36:47.493379 4980 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5401683d-359e-4091-b68e-bf20ee64c946" Jan 07 03:36:51 crc kubenswrapper[4980]: I0107 03:36:51.339430 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:36:51 crc kubenswrapper[4980]: I0107 03:36:51.347647 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:36:51 crc kubenswrapper[4980]: I0107 03:36:51.523435 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 03:36:54 crc kubenswrapper[4980]: I0107 03:36:54.906405 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 07 03:36:55 crc kubenswrapper[4980]: I0107 03:36:55.772413 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 07 03:36:56 crc kubenswrapper[4980]: I0107 03:36:56.092413 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 07 03:36:56 crc kubenswrapper[4980]: I0107 03:36:56.411408 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 07 03:36:56 crc kubenswrapper[4980]: I0107 03:36:56.573648 4980 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 07 03:36:56 crc kubenswrapper[4980]: I0107 03:36:56.614729 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 07 03:36:56 crc kubenswrapper[4980]: I0107 03:36:56.647018 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 07 03:36:57 crc kubenswrapper[4980]: I0107 03:36:57.314506 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 07 03:36:57 crc kubenswrapper[4980]: I0107 03:36:57.389895 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 07 03:36:57 crc kubenswrapper[4980]: I0107 03:36:57.556621 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 07 03:36:57 crc kubenswrapper[4980]: I0107 03:36:57.627960 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.005044 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.108144 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.128593 4980 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.136147 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.136240 4980 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.142936 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.146250 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.168131 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.168109044 podStartE2EDuration="13.168109044s" podCreationTimestamp="2026-01-07 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:36:58.165207066 +0000 UTC m=+264.730901831" watchObservedRunningTime="2026-01-07 03:36:58.168109044 +0000 UTC m=+264.733803819" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.196069 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.385024 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.481173 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.546360 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.693651 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 07 03:36:58 crc 
kubenswrapper[4980]: I0107 03:36:58.761506 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.861241 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.890725 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 07 03:36:58 crc kubenswrapper[4980]: I0107 03:36:58.979000 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.024841 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.027879 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.096203 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.096435 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.214012 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.268593 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.306864 4980 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.360721 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.385472 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.385984 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.449587 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.480183 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.572954 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.685473 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.765669 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.809757 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.900960 4980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 07 03:36:59 crc kubenswrapper[4980]: I0107 03:36:59.934623 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.136149 4980 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.152679 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.223415 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.598894 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.630204 4980 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.745628 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.853917 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.857133 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 07 03:37:00 crc kubenswrapper[4980]: I0107 03:37:00.885048 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 07 03:37:00 crc 
kubenswrapper[4980]: I0107 03:37:00.978573 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.001006 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.129550 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.193127 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.328152 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.335598 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.402976 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.413443 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.493904 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.527070 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.608776 4980 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.614668 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.638440 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.871594 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.896989 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 07 03:37:01 crc kubenswrapper[4980]: I0107 03:37:01.940357 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.011348 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.081735 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.107452 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.107540 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.206978 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.228499 4980 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.263415 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.282210 4980 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.377660 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.391374 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.446232 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.451849 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.539719 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.540052 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.541018 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.621433 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.682807 4980 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.721093 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.825727 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.826874 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.833811 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 07 03:37:02 crc kubenswrapper[4980]: I0107 03:37:02.964189 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.063424 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.139375 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.144710 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.162338 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.363133 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.481600 4980 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.507972 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.548888 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.585682 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.688116 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.760796 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.776504 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.790370 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.842737 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.933354 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 07 03:37:03 crc kubenswrapper[4980]: I0107 03:37:03.969942 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: 
I0107 03:37:04.026048 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.095655 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.146170 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.342401 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.388055 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.410269 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.433894 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.457368 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.496749 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.561618 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.571217 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 07 
03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.619468 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.651104 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.688412 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.688512 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.708842 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.881376 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 07 03:37:04 crc kubenswrapper[4980]: I0107 03:37:04.903350 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.109192 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.144374 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.164138 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.219253 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 07 03:37:05 
crc kubenswrapper[4980]: I0107 03:37:05.249107 4980 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.255794 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.300741 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.304152 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.317463 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.361948 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.439894 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.444310 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.597123 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.720450 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.739942 4980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.939405 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.983969 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 07 03:37:05 crc kubenswrapper[4980]: I0107 03:37:05.995905 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.001873 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.040079 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.190533 4980 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.260928 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.277348 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.382437 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.414960 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.415626 4980 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.512529 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.513116 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.536444 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.632989 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.674491 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.737939 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.814770 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.894310 4980 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.894709 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9" gracePeriod=5 Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.958970 4980 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.968357 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.973635 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 07 03:37:06 crc kubenswrapper[4980]: I0107 03:37:06.976328 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.102114 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.107376 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.121786 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.153628 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.164899 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.265923 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.429957 4980 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.484614 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.577603 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.587458 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.708926 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.711682 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.728011 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.873917 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.885716 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 07 03:37:07 crc kubenswrapper[4980]: I0107 03:37:07.945158 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.032935 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 07 03:37:08 crc kubenswrapper[4980]: 
I0107 03:37:08.079844 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.122988 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.134473 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.145361 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.165841 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.233340 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.268117 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.330758 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.333812 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.393210 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.483238 4980 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.488645 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.506675 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.546948 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.565787 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.634185 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.701893 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.872674 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.885047 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 07 03:37:08 crc kubenswrapper[4980]: I0107 03:37:08.947477 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.031039 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 07 03:37:09 
crc kubenswrapper[4980]: I0107 03:37:09.093491 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.150490 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.169890 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.187350 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.250302 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.313730 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.332942 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.468626 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.484310 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.570288 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.596356 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 
07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.621911 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.634331 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.638158 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.641287 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.664155 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 07 03:37:09 crc kubenswrapper[4980]: I0107 03:37:09.824726 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 07 03:37:10 crc kubenswrapper[4980]: I0107 03:37:10.163980 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 07 03:37:10 crc kubenswrapper[4980]: I0107 03:37:10.337520 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 07 03:37:10 crc kubenswrapper[4980]: I0107 03:37:10.384215 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 07 03:37:10 crc kubenswrapper[4980]: I0107 03:37:10.393342 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 07 03:37:10 crc kubenswrapper[4980]: I0107 03:37:10.430677 
4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 07 03:37:10 crc kubenswrapper[4980]: I0107 03:37:10.866735 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 07 03:37:10 crc kubenswrapper[4980]: I0107 03:37:10.983642 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.020770 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.038338 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.046265 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.147942 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.169372 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.209776 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.213878 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.265441 4980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.275122 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.324085 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.477634 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.536309 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.882722 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.914650 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 07 03:37:11 crc kubenswrapper[4980]: I0107 03:37:11.977384 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.052232 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.167849 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.222792 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 07 03:37:12 crc 
kubenswrapper[4980]: I0107 03:37:12.247923 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.441448 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.502117 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.502792 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.559856 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.652279 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.652352 4980 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9" exitCode=137 Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.652403 4980 scope.go:117] "RemoveContainer" containerID="e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.652425 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.677014 4980 scope.go:117] "RemoveContainer" containerID="e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9" Jan 07 03:37:12 crc kubenswrapper[4980]: E0107 03:37:12.677913 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9\": container with ID starting with e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9 not found: ID does not exist" containerID="e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.677985 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9"} err="failed to get container status \"e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9\": rpc error: code = NotFound desc = could not find container \"e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9\": container with ID starting with e05b40a0ed15f46f07a9b48867556942ce5b5da887ee39a080d8601f05080fd9 not found: ID does not exist" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680199 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680291 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680301 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680466 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680534 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680591 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680607 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680689 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.680732 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.681025 4980 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.681059 4980 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.681077 4980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.681092 4980 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.689484 4980 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.748050 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.781958 4980 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:12 crc kubenswrapper[4980]: I0107 03:37:12.975587 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 07 03:37:13 crc kubenswrapper[4980]: I0107 03:37:13.098105 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 07 03:37:13 crc kubenswrapper[4980]: I0107 03:37:13.515848 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 07 03:37:13 crc kubenswrapper[4980]: I0107 03:37:13.527982 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 07 03:37:13 crc kubenswrapper[4980]: I0107 03:37:13.593498 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 07 03:37:13 crc kubenswrapper[4980]: I0107 03:37:13.689918 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 07 03:37:13 crc 
kubenswrapper[4980]: I0107 03:37:13.743227 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 07 03:37:13 crc kubenswrapper[4980]: I0107 03:37:13.745952 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 07 03:37:14 crc kubenswrapper[4980]: I0107 03:37:14.923854 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.215207 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bfnr"] Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.215889 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2bfnr" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="registry-server" containerID="cri-o://99797584c3f165e1ad63b5dc4b9af1ddd9d4fe4d4ea094d12343bda6070c895b" gracePeriod=30 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.220708 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-slxp5"] Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.220974 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-slxp5" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="registry-server" containerID="cri-o://5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e" gracePeriod=30 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.233885 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hc49h"] Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.234114 4980 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" podUID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" containerName="marketplace-operator" containerID="cri-o://f2eab25c8a3929a296531e2fc050430c5a19455278bb97e864c6f82d4937d756" gracePeriod=30 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.253343 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c5kx"] Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.253672 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9c5kx" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="registry-server" containerID="cri-o://8b0b0e2ab6146bd712e164f64f41873325ab4ceb00497991617f55316c214c16" gracePeriod=30 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.260115 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2mq9"] Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.260523 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2mq9" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="registry-server" containerID="cri-o://25b2ce87cf86990b0ce19b298827906d83d6827b9e6f66983e0e57bee66439d3" gracePeriod=30 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.277868 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mfp"] Jan 07 03:37:20 crc kubenswrapper[4980]: E0107 03:37:20.278979 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.279089 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 07 03:37:20 crc kubenswrapper[4980]: E0107 03:37:20.279302 4980 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" containerName="installer" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.279402 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" containerName="installer" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.279674 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.279788 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac6db5a-85f4-4134-bf3b-2ecbdbbd44af" containerName="installer" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.280418 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.306527 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-2bfnr" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="registry-server" probeResult="failure" output="" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.307419 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-2bfnr" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="registry-server" probeResult="failure" output="" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.333665 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mfp"] Jan 07 03:37:20 crc kubenswrapper[4980]: E0107 03:37:20.353764 4980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e is running failed: container process not found" 
containerID="5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e" cmd=["grpc_health_probe","-addr=:50051"] Jan 07 03:37:20 crc kubenswrapper[4980]: E0107 03:37:20.354329 4980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e is running failed: container process not found" containerID="5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e" cmd=["grpc_health_probe","-addr=:50051"] Jan 07 03:37:20 crc kubenswrapper[4980]: E0107 03:37:20.355035 4980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e is running failed: container process not found" containerID="5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e" cmd=["grpc_health_probe","-addr=:50051"] Jan 07 03:37:20 crc kubenswrapper[4980]: E0107 03:37:20.355158 4980 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-slxp5" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="registry-server" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.381520 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.381602 
4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.381650 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw485\" (UniqueName: \"kubernetes.io/projected/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-kube-api-access-jw485\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.483084 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.483143 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.483187 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw485\" (UniqueName: \"kubernetes.io/projected/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-kube-api-access-jw485\") pod 
\"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.484841 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.492453 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.501330 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw485\" (UniqueName: \"kubernetes.io/projected/cbc2df67-d00a-4200-b46f-b9eca0da0f4f-kube-api-access-jw485\") pod \"marketplace-operator-79b997595-t5mfp\" (UID: \"cbc2df67-d00a-4200-b46f-b9eca0da0f4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.595473 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.706832 4980 generic.go:334] "Generic (PLEG): container finished" podID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" containerID="f2eab25c8a3929a296531e2fc050430c5a19455278bb97e864c6f82d4937d756" exitCode=0 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.706967 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" event={"ID":"5c60a3ab-4428-4658-9d0b-5ed1608bd379","Type":"ContainerDied","Data":"f2eab25c8a3929a296531e2fc050430c5a19455278bb97e864c6f82d4937d756"} Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.710779 4980 generic.go:334] "Generic (PLEG): container finished" podID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerID="99797584c3f165e1ad63b5dc4b9af1ddd9d4fe4d4ea094d12343bda6070c895b" exitCode=0 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.710873 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bfnr" event={"ID":"7749febc-7b8f-4a6b-96e8-2579f281cede","Type":"ContainerDied","Data":"99797584c3f165e1ad63b5dc4b9af1ddd9d4fe4d4ea094d12343bda6070c895b"} Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.710944 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bfnr" event={"ID":"7749febc-7b8f-4a6b-96e8-2579f281cede","Type":"ContainerDied","Data":"f273c7007e67c918cb1d5380a9b70e75e513afd6d5951cf313ad478d4d627d32"} Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.710974 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f273c7007e67c918cb1d5380a9b70e75e513afd6d5951cf313ad478d4d627d32" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.718993 4980 generic.go:334] "Generic (PLEG): container finished" podID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" 
containerID="8b0b0e2ab6146bd712e164f64f41873325ab4ceb00497991617f55316c214c16" exitCode=0 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.719176 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c5kx" event={"ID":"16999f3a-e9cd-449c-bb6f-72b7759cb32e","Type":"ContainerDied","Data":"8b0b0e2ab6146bd712e164f64f41873325ab4ceb00497991617f55316c214c16"} Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.738018 4980 generic.go:334] "Generic (PLEG): container finished" podID="83605c82-2947-4e84-8657-e9d040571dae" containerID="5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e" exitCode=0 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.738092 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slxp5" event={"ID":"83605c82-2947-4e84-8657-e9d040571dae","Type":"ContainerDied","Data":"5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e"} Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.742297 4980 generic.go:334] "Generic (PLEG): container finished" podID="de1dad58-7ec1-4867-8494-044120bf894b" containerID="25b2ce87cf86990b0ce19b298827906d83d6827b9e6f66983e0e57bee66439d3" exitCode=0 Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.742351 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mq9" event={"ID":"de1dad58-7ec1-4867-8494-044120bf894b","Type":"ContainerDied","Data":"25b2ce87cf86990b0ce19b298827906d83d6827b9e6f66983e0e57bee66439d3"} Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.813007 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.819382 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.826375 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.844194 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.872415 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.990348 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-catalog-content\") pod \"83605c82-2947-4e84-8657-e9d040571dae\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.990903 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k22pv\" (UniqueName: \"kubernetes.io/projected/5c60a3ab-4428-4658-9d0b-5ed1608bd379-kube-api-access-k22pv\") pod \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.991020 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7bdh\" (UniqueName: \"kubernetes.io/projected/de1dad58-7ec1-4867-8494-044120bf894b-kube-api-access-n7bdh\") pod \"de1dad58-7ec1-4867-8494-044120bf894b\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.991125 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca\") pod \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.991228 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-utilities\") pod \"de1dad58-7ec1-4867-8494-044120bf894b\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.992225 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-catalog-content\") pod \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.998680 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-catalog-content\") pod \"7749febc-7b8f-4a6b-96e8-2579f281cede\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.998728 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7rkf\" (UniqueName: \"kubernetes.io/projected/83605c82-2947-4e84-8657-e9d040571dae-kube-api-access-p7rkf\") pod \"83605c82-2947-4e84-8657-e9d040571dae\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.998812 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-utilities\") pod \"7749febc-7b8f-4a6b-96e8-2579f281cede\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 
03:37:20.998870 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfql8\" (UniqueName: \"kubernetes.io/projected/16999f3a-e9cd-449c-bb6f-72b7759cb32e-kube-api-access-rfql8\") pod \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.998894 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-utilities\") pod \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\" (UID: \"16999f3a-e9cd-449c-bb6f-72b7759cb32e\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.998931 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnp6h\" (UniqueName: \"kubernetes.io/projected/7749febc-7b8f-4a6b-96e8-2579f281cede-kube-api-access-vnp6h\") pod \"7749febc-7b8f-4a6b-96e8-2579f281cede\" (UID: \"7749febc-7b8f-4a6b-96e8-2579f281cede\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.998960 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics\") pod \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\" (UID: \"5c60a3ab-4428-4658-9d0b-5ed1608bd379\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.992064 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-utilities" (OuterVolumeSpecName: "utilities") pod "de1dad58-7ec1-4867-8494-044120bf894b" (UID: "de1dad58-7ec1-4867-8494-044120bf894b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.999006 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-catalog-content\") pod \"de1dad58-7ec1-4867-8494-044120bf894b\" (UID: \"de1dad58-7ec1-4867-8494-044120bf894b\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.992158 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5c60a3ab-4428-4658-9d0b-5ed1608bd379" (UID: "5c60a3ab-4428-4658-9d0b-5ed1608bd379"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.997223 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1dad58-7ec1-4867-8494-044120bf894b-kube-api-access-n7bdh" (OuterVolumeSpecName: "kube-api-access-n7bdh") pod "de1dad58-7ec1-4867-8494-044120bf894b" (UID: "de1dad58-7ec1-4867-8494-044120bf894b"). InnerVolumeSpecName "kube-api-access-n7bdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.999028 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-utilities\") pod \"83605c82-2947-4e84-8657-e9d040571dae\" (UID: \"83605c82-2947-4e84-8657-e9d040571dae\") " Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.997937 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c60a3ab-4428-4658-9d0b-5ed1608bd379-kube-api-access-k22pv" (OuterVolumeSpecName: "kube-api-access-k22pv") pod "5c60a3ab-4428-4658-9d0b-5ed1608bd379" (UID: "5c60a3ab-4428-4658-9d0b-5ed1608bd379"). InnerVolumeSpecName "kube-api-access-k22pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.999588 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k22pv\" (UniqueName: \"kubernetes.io/projected/5c60a3ab-4428-4658-9d0b-5ed1608bd379-kube-api-access-k22pv\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.999601 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7bdh\" (UniqueName: \"kubernetes.io/projected/de1dad58-7ec1-4867-8494-044120bf894b-kube-api-access-n7bdh\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.999612 4980 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:20 crc kubenswrapper[4980]: I0107 03:37:20.999622 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc 
kubenswrapper[4980]: I0107 03:37:21.000273 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-utilities" (OuterVolumeSpecName: "utilities") pod "83605c82-2947-4e84-8657-e9d040571dae" (UID: "83605c82-2947-4e84-8657-e9d040571dae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.000882 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-utilities" (OuterVolumeSpecName: "utilities") pod "7749febc-7b8f-4a6b-96e8-2579f281cede" (UID: "7749febc-7b8f-4a6b-96e8-2579f281cede"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.001475 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83605c82-2947-4e84-8657-e9d040571dae-kube-api-access-p7rkf" (OuterVolumeSpecName: "kube-api-access-p7rkf") pod "83605c82-2947-4e84-8657-e9d040571dae" (UID: "83605c82-2947-4e84-8657-e9d040571dae"). InnerVolumeSpecName "kube-api-access-p7rkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.001495 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-utilities" (OuterVolumeSpecName: "utilities") pod "16999f3a-e9cd-449c-bb6f-72b7759cb32e" (UID: "16999f3a-e9cd-449c-bb6f-72b7759cb32e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.002421 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7749febc-7b8f-4a6b-96e8-2579f281cede-kube-api-access-vnp6h" (OuterVolumeSpecName: "kube-api-access-vnp6h") pod "7749febc-7b8f-4a6b-96e8-2579f281cede" (UID: "7749febc-7b8f-4a6b-96e8-2579f281cede"). InnerVolumeSpecName "kube-api-access-vnp6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.002658 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16999f3a-e9cd-449c-bb6f-72b7759cb32e-kube-api-access-rfql8" (OuterVolumeSpecName: "kube-api-access-rfql8") pod "16999f3a-e9cd-449c-bb6f-72b7759cb32e" (UID: "16999f3a-e9cd-449c-bb6f-72b7759cb32e"). InnerVolumeSpecName "kube-api-access-rfql8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.004580 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5c60a3ab-4428-4658-9d0b-5ed1608bd379" (UID: "5c60a3ab-4428-4658-9d0b-5ed1608bd379"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.021328 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16999f3a-e9cd-449c-bb6f-72b7759cb32e" (UID: "16999f3a-e9cd-449c-bb6f-72b7759cb32e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.068533 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83605c82-2947-4e84-8657-e9d040571dae" (UID: "83605c82-2947-4e84-8657-e9d040571dae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.084377 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7749febc-7b8f-4a6b-96e8-2579f281cede" (UID: "7749febc-7b8f-4a6b-96e8-2579f281cede"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.085049 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mfp"] Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100451 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100476 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfql8\" (UniqueName: \"kubernetes.io/projected/16999f3a-e9cd-449c-bb6f-72b7759cb32e-kube-api-access-rfql8\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100490 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100501 4980 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnp6h\" (UniqueName: \"kubernetes.io/projected/7749febc-7b8f-4a6b-96e8-2579f281cede-kube-api-access-vnp6h\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100510 4980 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c60a3ab-4428-4658-9d0b-5ed1608bd379-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100520 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100529 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83605c82-2947-4e84-8657-e9d040571dae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100538 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16999f3a-e9cd-449c-bb6f-72b7759cb32e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100547 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749febc-7b8f-4a6b-96e8-2579f281cede-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.100571 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7rkf\" (UniqueName: \"kubernetes.io/projected/83605c82-2947-4e84-8657-e9d040571dae-kube-api-access-p7rkf\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.160458 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de1dad58-7ec1-4867-8494-044120bf894b" (UID: "de1dad58-7ec1-4867-8494-044120bf894b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.202697 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1dad58-7ec1-4867-8494-044120bf894b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.753384 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mq9" event={"ID":"de1dad58-7ec1-4867-8494-044120bf894b","Type":"ContainerDied","Data":"d2dc1e1aa59886a8e8d41172167664dddf18a0865244ea746886936b1ecf3498"} Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.753463 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.754194 4980 scope.go:117] "RemoveContainer" containerID="25b2ce87cf86990b0ce19b298827906d83d6827b9e6f66983e0e57bee66439d3" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.756052 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" event={"ID":"5c60a3ab-4428-4658-9d0b-5ed1608bd379","Type":"ContainerDied","Data":"de39d6d0a50cb0c33278b98318480915e2eaac1766c971517c38e8da8fe84fd5"} Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.756123 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hc49h" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.761472 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c5kx" event={"ID":"16999f3a-e9cd-449c-bb6f-72b7759cb32e","Type":"ContainerDied","Data":"37228e7429e05af6303cad94624f5c40a5a04f09e542e5de5130f8b61b3bdbd0"} Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.761693 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c5kx" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.765799 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slxp5" event={"ID":"83605c82-2947-4e84-8657-e9d040571dae","Type":"ContainerDied","Data":"7a063e83305c6dceeffaa05c82a5b8a487b7df622f9b5b73f3d32d6593dabe7b"} Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.766023 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slxp5" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.768294 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.772200 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" event={"ID":"cbc2df67-d00a-4200-b46f-b9eca0da0f4f","Type":"ContainerStarted","Data":"9b3616faa187a82197c3421a632ce916c1c61d023d2ec24959c6125d55fc6ce1"} Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.772246 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" event={"ID":"cbc2df67-d00a-4200-b46f-b9eca0da0f4f","Type":"ContainerStarted","Data":"03f48b08e12e38d287088b65d32b40f8c5e1d555be1105aff7ba73110010207a"} Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.773416 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.776416 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.789157 4980 scope.go:117] "RemoveContainer" containerID="1e184fe0692ba0ed84217fad990ab5f64066df8e968fc5ce8a0ca1e6c125b62b" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.833209 4980 scope.go:117] "RemoveContainer" containerID="b01f02352edafaf8b885715177b0c05292e3c86007c32ecc1f2a936e21585215" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.836739 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-t5mfp" podStartSLOduration=1.836676572 podStartE2EDuration="1.836676572s" podCreationTimestamp="2026-01-07 03:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:37:21.806326205 +0000 UTC 
m=+288.372020990" watchObservedRunningTime="2026-01-07 03:37:21.836676572 +0000 UTC m=+288.402371317" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.871217 4980 scope.go:117] "RemoveContainer" containerID="f2eab25c8a3929a296531e2fc050430c5a19455278bb97e864c6f82d4937d756" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.905226 4980 scope.go:117] "RemoveContainer" containerID="8b0b0e2ab6146bd712e164f64f41873325ab4ceb00497991617f55316c214c16" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.906629 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hc49h"] Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.917612 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hc49h"] Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.922234 4980 scope.go:117] "RemoveContainer" containerID="d228d6b3f5aa638b93530984e3b65752cc949e02c3fed88fd9be91fbbae0bc40" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.922515 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c5kx"] Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.925640 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c5kx"] Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.928348 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-slxp5"] Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.930935 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-slxp5"] Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.937055 4980 scope.go:117] "RemoveContainer" containerID="4c9dbc2c1e36fe43cdaa11d8339988f0e98465ff7c64ed3a9d91c56ebf6e35b9" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.956776 4980 scope.go:117] "RemoveContainer" 
containerID="5aed9a2179e9b384763b1e5cfa5f6a81f95bb18e6f32a5e548560e5cdbf25a0e" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.969139 4980 scope.go:117] "RemoveContainer" containerID="c223c25a607cb9ae6e6bdc96d08e812b847b20a0481954ab483a35b47df23a88" Jan 07 03:37:21 crc kubenswrapper[4980]: I0107 03:37:21.982176 4980 scope.go:117] "RemoveContainer" containerID="5ad0d0723787d429fc22e1e570d9a2eb355d47cfbe7b69fe9babe3bbae2a05de" Jan 07 03:37:23 crc kubenswrapper[4980]: I0107 03:37:23.747594 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" path="/var/lib/kubelet/pods/16999f3a-e9cd-449c-bb6f-72b7759cb32e/volumes" Jan 07 03:37:23 crc kubenswrapper[4980]: I0107 03:37:23.748916 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" path="/var/lib/kubelet/pods/5c60a3ab-4428-4658-9d0b-5ed1608bd379/volumes" Jan 07 03:37:23 crc kubenswrapper[4980]: I0107 03:37:23.750074 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83605c82-2947-4e84-8657-e9d040571dae" path="/var/lib/kubelet/pods/83605c82-2947-4e84-8657-e9d040571dae/volumes" Jan 07 03:37:33 crc kubenswrapper[4980]: I0107 03:37:33.565211 4980 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 07 03:37:51 crc kubenswrapper[4980]: I0107 03:37:51.856604 4980 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podde1dad58-7ec1-4867-8494-044120bf894b"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podde1dad58-7ec1-4867-8494-044120bf894b] : Timed out while waiting for systemd to remove kubepods-burstable-podde1dad58_7ec1_4867_8494_044120bf894b.slice" Jan 07 03:37:51 crc kubenswrapper[4980]: E0107 03:37:51.857405 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable 
podde1dad58-7ec1-4867-8494-044120bf894b] : unable to destroy cgroup paths for cgroup [kubepods burstable podde1dad58-7ec1-4867-8494-044120bf894b] : Timed out while waiting for systemd to remove kubepods-burstable-podde1dad58_7ec1_4867_8494_044120bf894b.slice" pod="openshift-marketplace/redhat-operators-k2mq9" podUID="de1dad58-7ec1-4867-8494-044120bf894b" Jan 07 03:37:51 crc kubenswrapper[4980]: I0107 03:37:51.887322 4980 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod7749febc-7b8f-4a6b-96e8-2579f281cede"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod7749febc-7b8f-4a6b-96e8-2579f281cede] : Timed out while waiting for systemd to remove kubepods-burstable-pod7749febc_7b8f_4a6b_96e8_2579f281cede.slice" Jan 07 03:37:51 crc kubenswrapper[4980]: E0107 03:37:51.887735 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod7749febc-7b8f-4a6b-96e8-2579f281cede] : unable to destroy cgroup paths for cgroup [kubepods burstable pod7749febc-7b8f-4a6b-96e8-2579f281cede] : Timed out while waiting for systemd to remove kubepods-burstable-pod7749febc_7b8f_4a6b_96e8_2579f281cede.slice" pod="openshift-marketplace/certified-operators-2bfnr" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" Jan 07 03:37:51 crc kubenswrapper[4980]: I0107 03:37:51.977801 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mq9" Jan 07 03:37:51 crc kubenswrapper[4980]: I0107 03:37:51.978284 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bfnr" Jan 07 03:37:52 crc kubenswrapper[4980]: I0107 03:37:52.026534 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bfnr"] Jan 07 03:37:52 crc kubenswrapper[4980]: I0107 03:37:52.031267 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2bfnr"] Jan 07 03:37:52 crc kubenswrapper[4980]: I0107 03:37:52.039190 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2mq9"] Jan 07 03:37:52 crc kubenswrapper[4980]: I0107 03:37:52.048219 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2mq9"] Jan 07 03:37:53 crc kubenswrapper[4980]: I0107 03:37:53.745120 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" path="/var/lib/kubelet/pods/7749febc-7b8f-4a6b-96e8-2579f281cede/volumes" Jan 07 03:37:53 crc kubenswrapper[4980]: I0107 03:37:53.748205 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1dad58-7ec1-4867-8494-044120bf894b" path="/var/lib/kubelet/pods/de1dad58-7ec1-4867-8494-044120bf894b/volumes" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.184300 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kp4dk"] Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185347 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185364 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185375 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" containerName="marketplace-operator" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185384 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" containerName="marketplace-operator" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185395 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185404 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185419 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185428 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185446 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185457 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185470 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185480 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185495 4980 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185505 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185520 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185529 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185540 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185547 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185578 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185586 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185594 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185602 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185612 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185619 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="extract-content" Jan 07 03:38:05 crc kubenswrapper[4980]: E0107 03:38:05.185629 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185637 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="extract-utilities" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185750 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1dad58-7ec1-4867-8494-044120bf894b" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185762 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="16999f3a-e9cd-449c-bb6f-72b7759cb32e" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185779 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="83605c82-2947-4e84-8657-e9d040571dae" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185790 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c60a3ab-4428-4658-9d0b-5ed1608bd379" containerName="marketplace-operator" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.185800 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7749febc-7b8f-4a6b-96e8-2579f281cede" containerName="registry-server" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.186214 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.205977 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kp4dk"] Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.289728 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-registry-tls\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.289835 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7e07cf46-3aed-4f64-a4e1-00e690e176b2-registry-certificates\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.289875 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5t5\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-kube-api-access-4f5t5\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.289980 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7e07cf46-3aed-4f64-a4e1-00e690e176b2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.290014 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7e07cf46-3aed-4f64-a4e1-00e690e176b2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.290061 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.290106 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e07cf46-3aed-4f64-a4e1-00e690e176b2-trusted-ca\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.290175 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-bound-sa-token\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.325924 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.391831 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-bound-sa-token\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.391917 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-registry-tls\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.391983 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7e07cf46-3aed-4f64-a4e1-00e690e176b2-registry-certificates\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.392011 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f5t5\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-kube-api-access-4f5t5\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.392055 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7e07cf46-3aed-4f64-a4e1-00e690e176b2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.392082 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7e07cf46-3aed-4f64-a4e1-00e690e176b2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.392115 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e07cf46-3aed-4f64-a4e1-00e690e176b2-trusted-ca\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.394594 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7e07cf46-3aed-4f64-a4e1-00e690e176b2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.394719 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e07cf46-3aed-4f64-a4e1-00e690e176b2-trusted-ca\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 
03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.395115 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7e07cf46-3aed-4f64-a4e1-00e690e176b2-registry-certificates\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.408331 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-registry-tls\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.413899 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-bound-sa-token\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.420645 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7e07cf46-3aed-4f64-a4e1-00e690e176b2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.424506 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f5t5\" (UniqueName: \"kubernetes.io/projected/7e07cf46-3aed-4f64-a4e1-00e690e176b2-kube-api-access-4f5t5\") pod \"image-registry-66df7c8f76-kp4dk\" (UID: \"7e07cf46-3aed-4f64-a4e1-00e690e176b2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:05 crc kubenswrapper[4980]: I0107 03:38:05.518363 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:06 crc kubenswrapper[4980]: I0107 03:38:06.002779 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kp4dk"] Jan 07 03:38:06 crc kubenswrapper[4980]: I0107 03:38:06.077378 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" event={"ID":"7e07cf46-3aed-4f64-a4e1-00e690e176b2","Type":"ContainerStarted","Data":"cb3cbdc123e3e782d03588fb735a724f565d84d428b427262ddfff7fab89bb29"} Jan 07 03:38:06 crc kubenswrapper[4980]: I0107 03:38:06.542781 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:38:06 crc kubenswrapper[4980]: I0107 03:38:06.542857 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:38:07 crc kubenswrapper[4980]: I0107 03:38:07.087300 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" event={"ID":"7e07cf46-3aed-4f64-a4e1-00e690e176b2","Type":"ContainerStarted","Data":"b1ba23c6446e2249357e7ba57eea5e39e7b22eeaac12d5e92c85125d490bebd4"} Jan 07 03:38:07 crc kubenswrapper[4980]: I0107 03:38:07.087579 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:07 crc kubenswrapper[4980]: I0107 03:38:07.111818 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" podStartSLOduration=2.111791874 podStartE2EDuration="2.111791874s" podCreationTimestamp="2026-01-07 03:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:38:07.108745125 +0000 UTC m=+333.674439890" watchObservedRunningTime="2026-01-07 03:38:07.111791874 +0000 UTC m=+333.677486649" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.587027 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5h6xq"] Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.592309 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.595418 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.601423 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h6xq"] Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.700435 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc1e23f-0cd2-4ab5-be99-0e90aa809529-catalog-content\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.700514 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7hw\" (UniqueName: 
\"kubernetes.io/projected/edc1e23f-0cd2-4ab5-be99-0e90aa809529-kube-api-access-tm7hw\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.700610 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc1e23f-0cd2-4ab5-be99-0e90aa809529-utilities\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.778320 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdt7j"] Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.780010 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.782473 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.799728 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdt7j"] Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.803011 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc1e23f-0cd2-4ab5-be99-0e90aa809529-utilities\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.803115 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc1e23f-0cd2-4ab5-be99-0e90aa809529-catalog-content\") pod 
\"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.803143 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7hw\" (UniqueName: \"kubernetes.io/projected/edc1e23f-0cd2-4ab5-be99-0e90aa809529-kube-api-access-tm7hw\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.803906 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc1e23f-0cd2-4ab5-be99-0e90aa809529-catalog-content\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.804242 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc1e23f-0cd2-4ab5-be99-0e90aa809529-utilities\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.840733 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7hw\" (UniqueName: \"kubernetes.io/projected/edc1e23f-0cd2-4ab5-be99-0e90aa809529-kube-api-access-tm7hw\") pod \"redhat-operators-5h6xq\" (UID: \"edc1e23f-0cd2-4ab5-be99-0e90aa809529\") " pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.905007 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a3ee567-8dae-40c8-9f42-6a01ec72b480-catalog-content\") pod \"redhat-marketplace-qdt7j\" (UID: 
\"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.905375 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhz4m\" (UniqueName: \"kubernetes.io/projected/4a3ee567-8dae-40c8-9f42-6a01ec72b480-kube-api-access-hhz4m\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.905506 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a3ee567-8dae-40c8-9f42-6a01ec72b480-utilities\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:24 crc kubenswrapper[4980]: I0107 03:38:24.930736 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.006055 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a3ee567-8dae-40c8-9f42-6a01ec72b480-utilities\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.006108 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a3ee567-8dae-40c8-9f42-6a01ec72b480-catalog-content\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.006142 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhz4m\" (UniqueName: \"kubernetes.io/projected/4a3ee567-8dae-40c8-9f42-6a01ec72b480-kube-api-access-hhz4m\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.006928 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a3ee567-8dae-40c8-9f42-6a01ec72b480-utilities\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.007169 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a3ee567-8dae-40c8-9f42-6a01ec72b480-catalog-content\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " 
pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.031026 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhz4m\" (UniqueName: \"kubernetes.io/projected/4a3ee567-8dae-40c8-9f42-6a01ec72b480-kube-api-access-hhz4m\") pod \"redhat-marketplace-qdt7j\" (UID: \"4a3ee567-8dae-40c8-9f42-6a01ec72b480\") " pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.102994 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.166220 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h6xq"] Jan 07 03:38:25 crc kubenswrapper[4980]: W0107 03:38:25.180435 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc1e23f_0cd2_4ab5_be99_0e90aa809529.slice/crio-ba4d9ad597aeb1da7831d81e41be8fd0ec106712a31f2bac7706317d886a3b98 WatchSource:0}: Error finding container ba4d9ad597aeb1da7831d81e41be8fd0ec106712a31f2bac7706317d886a3b98: Status 404 returned error can't find the container with id ba4d9ad597aeb1da7831d81e41be8fd0ec106712a31f2bac7706317d886a3b98 Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.201731 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6xq" event={"ID":"edc1e23f-0cd2-4ab5-be99-0e90aa809529","Type":"ContainerStarted","Data":"ba4d9ad597aeb1da7831d81e41be8fd0ec106712a31f2bac7706317d886a3b98"} Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.362080 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdt7j"] Jan 07 03:38:25 crc kubenswrapper[4980]: W0107 03:38:25.367303 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3ee567_8dae_40c8_9f42_6a01ec72b480.slice/crio-cd1d0551cd9b7f80a8c7ef68c4f0d2b35cdb69acebd8bb644e9f70b0de249f04 WatchSource:0}: Error finding container cd1d0551cd9b7f80a8c7ef68c4f0d2b35cdb69acebd8bb644e9f70b0de249f04: Status 404 returned error can't find the container with id cd1d0551cd9b7f80a8c7ef68c4f0d2b35cdb69acebd8bb644e9f70b0de249f04 Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.526678 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-kp4dk" Jan 07 03:38:25 crc kubenswrapper[4980]: I0107 03:38:25.598930 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nql9v"] Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.210867 4980 generic.go:334] "Generic (PLEG): container finished" podID="4a3ee567-8dae-40c8-9f42-6a01ec72b480" containerID="2f97e5c17737b7bf465c78617e2559d550271e3d022d102ab9c02e4d3813d544" exitCode=0 Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.211391 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdt7j" event={"ID":"4a3ee567-8dae-40c8-9f42-6a01ec72b480","Type":"ContainerDied","Data":"2f97e5c17737b7bf465c78617e2559d550271e3d022d102ab9c02e4d3813d544"} Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.211444 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdt7j" event={"ID":"4a3ee567-8dae-40c8-9f42-6a01ec72b480","Type":"ContainerStarted","Data":"cd1d0551cd9b7f80a8c7ef68c4f0d2b35cdb69acebd8bb644e9f70b0de249f04"} Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.213483 4980 generic.go:334] "Generic (PLEG): container finished" podID="edc1e23f-0cd2-4ab5-be99-0e90aa809529" containerID="173ae5564180323691b7e425fea84e2d8ead1219969a8bb99bd1725088ba4b04" exitCode=0 Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 
03:38:26.213536 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6xq" event={"ID":"edc1e23f-0cd2-4ab5-be99-0e90aa809529","Type":"ContainerDied","Data":"173ae5564180323691b7e425fea84e2d8ead1219969a8bb99bd1725088ba4b04"} Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.373213 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qcz5z"] Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.374190 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.379336 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.387773 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcz5z"] Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.523694 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-utilities\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.523803 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-catalog-content\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.523843 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2vhtz\" (UniqueName: \"kubernetes.io/projected/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-kube-api-access-2vhtz\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.625579 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-utilities\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.625706 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-catalog-content\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.625757 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vhtz\" (UniqueName: \"kubernetes.io/projected/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-kube-api-access-2vhtz\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.626471 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-utilities\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.626640 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-catalog-content\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.661980 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vhtz\" (UniqueName: \"kubernetes.io/projected/acd8b20e-5d5b-4f22-b29e-109b6e039ad9-kube-api-access-2vhtz\") pod \"certified-operators-qcz5z\" (UID: \"acd8b20e-5d5b-4f22-b29e-109b6e039ad9\") " pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.705615 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:26 crc kubenswrapper[4980]: I0107 03:38:26.979907 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcz5z"] Jan 07 03:38:26 crc kubenswrapper[4980]: W0107 03:38:26.987753 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacd8b20e_5d5b_4f22_b29e_109b6e039ad9.slice/crio-620ccb16263ab5d24131811fae23426931a4ffc58ddbf4d77cf6dfebcf898462 WatchSource:0}: Error finding container 620ccb16263ab5d24131811fae23426931a4ffc58ddbf4d77cf6dfebcf898462: Status 404 returned error can't find the container with id 620ccb16263ab5d24131811fae23426931a4ffc58ddbf4d77cf6dfebcf898462 Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.221398 4980 generic.go:334] "Generic (PLEG): container finished" podID="acd8b20e-5d5b-4f22-b29e-109b6e039ad9" containerID="e533aaf588ec56e8bd0d7e1bd96566353a3f8f9ea236ae243803b95d2fd9c750" exitCode=0 Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.221437 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcz5z" 
event={"ID":"acd8b20e-5d5b-4f22-b29e-109b6e039ad9","Type":"ContainerDied","Data":"e533aaf588ec56e8bd0d7e1bd96566353a3f8f9ea236ae243803b95d2fd9c750"} Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.221461 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcz5z" event={"ID":"acd8b20e-5d5b-4f22-b29e-109b6e039ad9","Type":"ContainerStarted","Data":"620ccb16263ab5d24131811fae23426931a4ffc58ddbf4d77cf6dfebcf898462"} Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.774305 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pgfdc"] Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.775659 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.778046 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.785521 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgfdc"] Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.946669 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57ca4dc-5247-4b77-808b-c7e095b4b167-utilities\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.946790 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9f2t\" (UniqueName: \"kubernetes.io/projected/d57ca4dc-5247-4b77-808b-c7e095b4b167-kube-api-access-d9f2t\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " 
pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:27 crc kubenswrapper[4980]: I0107 03:38:27.946830 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57ca4dc-5247-4b77-808b-c7e095b4b167-catalog-content\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.048357 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57ca4dc-5247-4b77-808b-c7e095b4b167-utilities\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.048444 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9f2t\" (UniqueName: \"kubernetes.io/projected/d57ca4dc-5247-4b77-808b-c7e095b4b167-kube-api-access-d9f2t\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.048519 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57ca4dc-5247-4b77-808b-c7e095b4b167-catalog-content\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.049424 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57ca4dc-5247-4b77-808b-c7e095b4b167-catalog-content\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " 
pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.049922 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57ca4dc-5247-4b77-808b-c7e095b4b167-utilities\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.081826 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9f2t\" (UniqueName: \"kubernetes.io/projected/d57ca4dc-5247-4b77-808b-c7e095b4b167-kube-api-access-d9f2t\") pod \"community-operators-pgfdc\" (UID: \"d57ca4dc-5247-4b77-808b-c7e095b4b167\") " pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.098408 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.238607 4980 generic.go:334] "Generic (PLEG): container finished" podID="edc1e23f-0cd2-4ab5-be99-0e90aa809529" containerID="8b8bc40212b1c5a8637b7c70a8905aeaf4b449dd297802cc62af983501f6aad8" exitCode=0 Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.238918 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6xq" event={"ID":"edc1e23f-0cd2-4ab5-be99-0e90aa809529","Type":"ContainerDied","Data":"8b8bc40212b1c5a8637b7c70a8905aeaf4b449dd297802cc62af983501f6aad8"} Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.241609 4980 generic.go:334] "Generic (PLEG): container finished" podID="4a3ee567-8dae-40c8-9f42-6a01ec72b480" containerID="e959f04fa8d5ee05ca61af1c0e5ac3e4c68bc41be1a9afd1ddd63dae0f2229f9" exitCode=0 Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.241630 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-qdt7j" event={"ID":"4a3ee567-8dae-40c8-9f42-6a01ec72b480","Type":"ContainerDied","Data":"e959f04fa8d5ee05ca61af1c0e5ac3e4c68bc41be1a9afd1ddd63dae0f2229f9"} Jan 07 03:38:28 crc kubenswrapper[4980]: I0107 03:38:28.380538 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgfdc"] Jan 07 03:38:28 crc kubenswrapper[4980]: W0107 03:38:28.386917 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd57ca4dc_5247_4b77_808b_c7e095b4b167.slice/crio-d589daa8476e661805dea3fd248ac3a943b9e30269ef639c2a5cbb460d276bd2 WatchSource:0}: Error finding container d589daa8476e661805dea3fd248ac3a943b9e30269ef639c2a5cbb460d276bd2: Status 404 returned error can't find the container with id d589daa8476e661805dea3fd248ac3a943b9e30269ef639c2a5cbb460d276bd2 Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.247489 4980 generic.go:334] "Generic (PLEG): container finished" podID="acd8b20e-5d5b-4f22-b29e-109b6e039ad9" containerID="2d9fcb1527ea4d0317c4cbccfc4cf5600f018ba4dee68861f6e83bc701759143" exitCode=0 Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.247598 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcz5z" event={"ID":"acd8b20e-5d5b-4f22-b29e-109b6e039ad9","Type":"ContainerDied","Data":"2d9fcb1527ea4d0317c4cbccfc4cf5600f018ba4dee68861f6e83bc701759143"} Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.249847 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdt7j" event={"ID":"4a3ee567-8dae-40c8-9f42-6a01ec72b480","Type":"ContainerStarted","Data":"b7fd805ac163151404da5762e2bb854d3ad5804f975972eea3b7bbfedcfcb949"} Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.260935 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6xq" 
event={"ID":"edc1e23f-0cd2-4ab5-be99-0e90aa809529","Type":"ContainerStarted","Data":"cedb1b15e8e44afd997af0f53f3b23a20e7b03dc9a6127646eebceac7fcbfbc8"} Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.267936 4980 generic.go:334] "Generic (PLEG): container finished" podID="d57ca4dc-5247-4b77-808b-c7e095b4b167" containerID="c2fc2559e38030476775e5206514d9dbd45ce01e6eda8261c6ad5aa4daa7e76f" exitCode=0 Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.267982 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgfdc" event={"ID":"d57ca4dc-5247-4b77-808b-c7e095b4b167","Type":"ContainerDied","Data":"c2fc2559e38030476775e5206514d9dbd45ce01e6eda8261c6ad5aa4daa7e76f"} Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.268007 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgfdc" event={"ID":"d57ca4dc-5247-4b77-808b-c7e095b4b167","Type":"ContainerStarted","Data":"d589daa8476e661805dea3fd248ac3a943b9e30269ef639c2a5cbb460d276bd2"} Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.308877 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5h6xq" podStartSLOduration=2.8719221619999997 podStartE2EDuration="5.308861212s" podCreationTimestamp="2026-01-07 03:38:24 +0000 UTC" firstStartedPulling="2026-01-07 03:38:26.217515917 +0000 UTC m=+352.783210692" lastFinishedPulling="2026-01-07 03:38:28.654454967 +0000 UTC m=+355.220149742" observedRunningTime="2026-01-07 03:38:29.30785317 +0000 UTC m=+355.873547905" watchObservedRunningTime="2026-01-07 03:38:29.308861212 +0000 UTC m=+355.874555937" Jan 07 03:38:29 crc kubenswrapper[4980]: I0107 03:38:29.325434 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdt7j" podStartSLOduration=2.8213771420000002 podStartE2EDuration="5.325415247s" podCreationTimestamp="2026-01-07 03:38:24 +0000 
UTC" firstStartedPulling="2026-01-07 03:38:26.214017234 +0000 UTC m=+352.779711999" lastFinishedPulling="2026-01-07 03:38:28.718055359 +0000 UTC m=+355.283750104" observedRunningTime="2026-01-07 03:38:29.321570032 +0000 UTC m=+355.887264767" watchObservedRunningTime="2026-01-07 03:38:29.325415247 +0000 UTC m=+355.891109982" Jan 07 03:38:30 crc kubenswrapper[4980]: I0107 03:38:30.277265 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcz5z" event={"ID":"acd8b20e-5d5b-4f22-b29e-109b6e039ad9","Type":"ContainerStarted","Data":"ad705a4ab13c01a283f601b0798218d04962b84d5d0832c19250c44b0ebf85a5"} Jan 07 03:38:30 crc kubenswrapper[4980]: I0107 03:38:30.280583 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgfdc" event={"ID":"d57ca4dc-5247-4b77-808b-c7e095b4b167","Type":"ContainerStarted","Data":"0a2b993f6347e0f434005050dbff73c996e58ee216a963c3d3b12f35ff4aba87"} Jan 07 03:38:30 crc kubenswrapper[4980]: I0107 03:38:30.327569 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qcz5z" podStartSLOduration=1.682076993 podStartE2EDuration="4.327533681s" podCreationTimestamp="2026-01-07 03:38:26 +0000 UTC" firstStartedPulling="2026-01-07 03:38:27.225134899 +0000 UTC m=+353.790829634" lastFinishedPulling="2026-01-07 03:38:29.870591547 +0000 UTC m=+356.436286322" observedRunningTime="2026-01-07 03:38:30.306209842 +0000 UTC m=+356.871904597" watchObservedRunningTime="2026-01-07 03:38:30.327533681 +0000 UTC m=+356.893228426" Jan 07 03:38:31 crc kubenswrapper[4980]: I0107 03:38:31.291601 4980 generic.go:334] "Generic (PLEG): container finished" podID="d57ca4dc-5247-4b77-808b-c7e095b4b167" containerID="0a2b993f6347e0f434005050dbff73c996e58ee216a963c3d3b12f35ff4aba87" exitCode=0 Jan 07 03:38:31 crc kubenswrapper[4980]: I0107 03:38:31.291674 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-pgfdc" event={"ID":"d57ca4dc-5247-4b77-808b-c7e095b4b167","Type":"ContainerDied","Data":"0a2b993f6347e0f434005050dbff73c996e58ee216a963c3d3b12f35ff4aba87"} Jan 07 03:38:32 crc kubenswrapper[4980]: I0107 03:38:32.299657 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgfdc" event={"ID":"d57ca4dc-5247-4b77-808b-c7e095b4b167","Type":"ContainerStarted","Data":"a39a45319680a337c77bb6d50a04353116adb7d8ca69c185e7602ae48f2ae771"} Jan 07 03:38:32 crc kubenswrapper[4980]: I0107 03:38:32.330828 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pgfdc" podStartSLOduration=2.777147712 podStartE2EDuration="5.330809708s" podCreationTimestamp="2026-01-07 03:38:27 +0000 UTC" firstStartedPulling="2026-01-07 03:38:29.269079899 +0000 UTC m=+355.834774634" lastFinishedPulling="2026-01-07 03:38:31.822741855 +0000 UTC m=+358.388436630" observedRunningTime="2026-01-07 03:38:32.329025261 +0000 UTC m=+358.894720006" watchObservedRunningTime="2026-01-07 03:38:32.330809708 +0000 UTC m=+358.896504453" Jan 07 03:38:34 crc kubenswrapper[4980]: I0107 03:38:34.931664 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:34 crc kubenswrapper[4980]: I0107 03:38:34.932769 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:35 crc kubenswrapper[4980]: I0107 03:38:35.103932 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:35 crc kubenswrapper[4980]: I0107 03:38:35.104595 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:35 crc kubenswrapper[4980]: I0107 03:38:35.174099 4980 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:35 crc kubenswrapper[4980]: I0107 03:38:35.391380 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdt7j" Jan 07 03:38:36 crc kubenswrapper[4980]: I0107 03:38:36.012089 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5h6xq" podUID="edc1e23f-0cd2-4ab5-be99-0e90aa809529" containerName="registry-server" probeResult="failure" output=< Jan 07 03:38:36 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:38:36 crc kubenswrapper[4980]: > Jan 07 03:38:36 crc kubenswrapper[4980]: I0107 03:38:36.543050 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:38:36 crc kubenswrapper[4980]: I0107 03:38:36.543138 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:38:36 crc kubenswrapper[4980]: I0107 03:38:36.705951 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:36 crc kubenswrapper[4980]: I0107 03:38:36.706046 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:36 crc kubenswrapper[4980]: I0107 03:38:36.777948 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:37 crc kubenswrapper[4980]: I0107 03:38:37.399701 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qcz5z" Jan 07 03:38:38 crc kubenswrapper[4980]: I0107 03:38:38.099666 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:38 crc kubenswrapper[4980]: I0107 03:38:38.099862 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:38 crc kubenswrapper[4980]: I0107 03:38:38.168182 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:38 crc kubenswrapper[4980]: I0107 03:38:38.384538 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pgfdc" Jan 07 03:38:45 crc kubenswrapper[4980]: I0107 03:38:45.003748 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:45 crc kubenswrapper[4980]: I0107 03:38:45.081836 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5h6xq" Jan 07 03:38:50 crc kubenswrapper[4980]: I0107 03:38:50.643760 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" podUID="d4318c07-8e55-4555-bebb-297c5bb68e73" containerName="registry" containerID="cri-o://f2a7d9f5a36b5c3ff06194df911da162c3e195fbf79bac38f7530202ce1a8171" gracePeriod=30 Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.005432 4980 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nql9v container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get 
\"https://10.217.0.30:5000/healthz\": dial tcp 10.217.0.30:5000: connect: connection refused" start-of-body= Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.006036 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" podUID="d4318c07-8e55-4555-bebb-297c5bb68e73" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.30:5000/healthz\": dial tcp 10.217.0.30:5000: connect: connection refused" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.428594 4980 generic.go:334] "Generic (PLEG): container finished" podID="d4318c07-8e55-4555-bebb-297c5bb68e73" containerID="f2a7d9f5a36b5c3ff06194df911da162c3e195fbf79bac38f7530202ce1a8171" exitCode=0 Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.428626 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" event={"ID":"d4318c07-8e55-4555-bebb-297c5bb68e73","Type":"ContainerDied","Data":"f2a7d9f5a36b5c3ff06194df911da162c3e195fbf79bac38f7530202ce1a8171"} Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.595977 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627148 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4318c07-8e55-4555-bebb-297c5bb68e73-installation-pull-secrets\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627199 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-bound-sa-token\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627255 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4318c07-8e55-4555-bebb-297c5bb68e73-ca-trust-extracted\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627284 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-tls\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627304 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgss\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-kube-api-access-2wgss\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627327 4980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-certificates\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627465 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.627488 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-trusted-ca\") pod \"d4318c07-8e55-4555-bebb-297c5bb68e73\" (UID: \"d4318c07-8e55-4555-bebb-297c5bb68e73\") " Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.628789 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.629098 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.638666 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.638932 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4318c07-8e55-4555-bebb-297c5bb68e73-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.639405 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.649947 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4318c07-8e55-4555-bebb-297c5bb68e73-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.650922 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.653686 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-kube-api-access-2wgss" (OuterVolumeSpecName: "kube-api-access-2wgss") pod "d4318c07-8e55-4555-bebb-297c5bb68e73" (UID: "d4318c07-8e55-4555-bebb-297c5bb68e73"). InnerVolumeSpecName "kube-api-access-2wgss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.728527 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.728614 4980 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4318c07-8e55-4555-bebb-297c5bb68e73-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.728629 4980 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.728643 4980 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/d4318c07-8e55-4555-bebb-297c5bb68e73-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.728658 4980 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.728673 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgss\" (UniqueName: \"kubernetes.io/projected/d4318c07-8e55-4555-bebb-297c5bb68e73-kube-api-access-2wgss\") on node \"crc\" DevicePath \"\"" Jan 07 03:38:51 crc kubenswrapper[4980]: I0107 03:38:51.728687 4980 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4318c07-8e55-4555-bebb-297c5bb68e73-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 07 03:38:52 crc kubenswrapper[4980]: I0107 03:38:52.439951 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" event={"ID":"d4318c07-8e55-4555-bebb-297c5bb68e73","Type":"ContainerDied","Data":"8c5059b2628b5c74f746db8aaa5228d57dc7a10fa88bad35c71178c3e7e00eef"} Jan 07 03:38:52 crc kubenswrapper[4980]: I0107 03:38:52.440031 4980 scope.go:117] "RemoveContainer" containerID="f2a7d9f5a36b5c3ff06194df911da162c3e195fbf79bac38f7530202ce1a8171" Jan 07 03:38:52 crc kubenswrapper[4980]: I0107 03:38:52.440096 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nql9v" Jan 07 03:38:52 crc kubenswrapper[4980]: I0107 03:38:52.470434 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nql9v"] Jan 07 03:38:52 crc kubenswrapper[4980]: I0107 03:38:52.478729 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nql9v"] Jan 07 03:38:53 crc kubenswrapper[4980]: I0107 03:38:53.750481 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4318c07-8e55-4555-bebb-297c5bb68e73" path="/var/lib/kubelet/pods/d4318c07-8e55-4555-bebb-297c5bb68e73/volumes" Jan 07 03:39:06 crc kubenswrapper[4980]: I0107 03:39:06.543256 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:39:06 crc kubenswrapper[4980]: I0107 03:39:06.544147 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:39:06 crc kubenswrapper[4980]: I0107 03:39:06.544251 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:39:06 crc kubenswrapper[4980]: I0107 03:39:06.545213 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"735d601985cf5f44c3448b356ab2c68e5af439ea67b0bab54864bd397960c698"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 03:39:06 crc kubenswrapper[4980]: I0107 03:39:06.545310 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://735d601985cf5f44c3448b356ab2c68e5af439ea67b0bab54864bd397960c698" gracePeriod=600 Jan 07 03:39:07 crc kubenswrapper[4980]: I0107 03:39:07.554581 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="735d601985cf5f44c3448b356ab2c68e5af439ea67b0bab54864bd397960c698" exitCode=0 Jan 07 03:39:07 crc kubenswrapper[4980]: I0107 03:39:07.554876 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"735d601985cf5f44c3448b356ab2c68e5af439ea67b0bab54864bd397960c698"} Jan 07 03:39:07 crc kubenswrapper[4980]: I0107 03:39:07.555065 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"b42f21736ffa02dccd45145f74074847a6300123f32573dcab50b9332e94e700"} Jan 07 03:39:07 crc kubenswrapper[4980]: I0107 03:39:07.555105 4980 scope.go:117] "RemoveContainer" containerID="d5461ea8d9948e1ff7a012dea05dfbf3ab5988f6b2ab0109be441dbfea33c66a" Jan 07 03:41:06 crc kubenswrapper[4980]: I0107 03:41:06.543092 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:41:06 crc kubenswrapper[4980]: I0107 
03:41:06.543974 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:41:34 crc kubenswrapper[4980]: I0107 03:41:34.054412 4980 scope.go:117] "RemoveContainer" containerID="949f4a9573d52bd958dc0c073bde632161cb174152eee2fe8da757a608cab614" Jan 07 03:41:34 crc kubenswrapper[4980]: I0107 03:41:34.082979 4980 scope.go:117] "RemoveContainer" containerID="6018d287f164c0533c50b3a376b1308d0c74e9fdc3c8ccd52cd2fb8e29d72de5" Jan 07 03:41:36 crc kubenswrapper[4980]: I0107 03:41:36.543365 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:41:36 crc kubenswrapper[4980]: I0107 03:41:36.544132 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.543732 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.544701 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.544773 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.545901 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b42f21736ffa02dccd45145f74074847a6300123f32573dcab50b9332e94e700"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.546034 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://b42f21736ffa02dccd45145f74074847a6300123f32573dcab50b9332e94e700" gracePeriod=600 Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.845532 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="b42f21736ffa02dccd45145f74074847a6300123f32573dcab50b9332e94e700" exitCode=0 Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.845635 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"b42f21736ffa02dccd45145f74074847a6300123f32573dcab50b9332e94e700"} Jan 07 03:42:06 crc kubenswrapper[4980]: I0107 03:42:06.846062 4980 scope.go:117] "RemoveContainer" 
containerID="735d601985cf5f44c3448b356ab2c68e5af439ea67b0bab54864bd397960c698" Jan 07 03:42:07 crc kubenswrapper[4980]: I0107 03:42:07.857410 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"22f87e8413daf7843826baa261082b343285cfe845501a26b14ff6b1f2751cb0"} Jan 07 03:42:34 crc kubenswrapper[4980]: I0107 03:42:34.132688 4980 scope.go:117] "RemoveContainer" containerID="b0f3eb6126d4d0dfa937a45a70ce347cc1818a246db9077545188be1a27a393e" Jan 07 03:42:34 crc kubenswrapper[4980]: I0107 03:42:34.177437 4980 scope.go:117] "RemoveContainer" containerID="67352690022e49bcd22a0cdeaa7736e2f45bb4d902f2072638d88527f6878030" Jan 07 03:42:34 crc kubenswrapper[4980]: I0107 03:42:34.211692 4980 scope.go:117] "RemoveContainer" containerID="99797584c3f165e1ad63b5dc4b9af1ddd9d4fe4d4ea094d12343bda6070c895b" Jan 07 03:42:34 crc kubenswrapper[4980]: I0107 03:42:34.233797 4980 scope.go:117] "RemoveContainer" containerID="ee5ff5560ef50d344925ce4e2ce9811d0aea437332186422481ab0cab54490f8" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.330135 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk"] Jan 07 03:43:15 crc kubenswrapper[4980]: E0107 03:43:15.330915 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4318c07-8e55-4555-bebb-297c5bb68e73" containerName="registry" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.330928 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4318c07-8e55-4555-bebb-297c5bb68e73" containerName="registry" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.331018 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4318c07-8e55-4555-bebb-297c5bb68e73" containerName="registry" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.331377 4980 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.333374 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.333565 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.339042 4980 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l2h82" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.342296 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk"] Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.363512 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25nfw"] Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.364361 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.367674 4980 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dws5p" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.369609 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vchrs"] Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.370403 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vchrs" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.371825 4980 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4h5gt" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.376164 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25nfw"] Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.379932 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vchrs"] Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.518081 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkkxw\" (UniqueName: \"kubernetes.io/projected/a5762e00-3e81-401d-8365-8d6791ecbf4f-kube-api-access-hkkxw\") pod \"cert-manager-858654f9db-vchrs\" (UID: \"a5762e00-3e81-401d-8365-8d6791ecbf4f\") " pod="cert-manager/cert-manager-858654f9db-vchrs" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.518188 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnr6p\" (UniqueName: \"kubernetes.io/projected/c7ebfede-1363-495d-b143-2e8db44394c0-kube-api-access-hnr6p\") pod \"cert-manager-cainjector-cf98fcc89-dfqdk\" (UID: \"c7ebfede-1363-495d-b143-2e8db44394c0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.518722 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfbj\" (UniqueName: \"kubernetes.io/projected/59f9bc30-5a23-4161-95fa-68d941208670-kube-api-access-5wfbj\") pod \"cert-manager-webhook-687f57d79b-25nfw\" (UID: \"59f9bc30-5a23-4161-95fa-68d941208670\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.620349 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfbj\" (UniqueName: \"kubernetes.io/projected/59f9bc30-5a23-4161-95fa-68d941208670-kube-api-access-5wfbj\") pod \"cert-manager-webhook-687f57d79b-25nfw\" (UID: \"59f9bc30-5a23-4161-95fa-68d941208670\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.620403 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkkxw\" (UniqueName: \"kubernetes.io/projected/a5762e00-3e81-401d-8365-8d6791ecbf4f-kube-api-access-hkkxw\") pod \"cert-manager-858654f9db-vchrs\" (UID: \"a5762e00-3e81-401d-8365-8d6791ecbf4f\") " pod="cert-manager/cert-manager-858654f9db-vchrs" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.620466 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnr6p\" (UniqueName: \"kubernetes.io/projected/c7ebfede-1363-495d-b143-2e8db44394c0-kube-api-access-hnr6p\") pod \"cert-manager-cainjector-cf98fcc89-dfqdk\" (UID: \"c7ebfede-1363-495d-b143-2e8db44394c0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.642489 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfbj\" (UniqueName: \"kubernetes.io/projected/59f9bc30-5a23-4161-95fa-68d941208670-kube-api-access-5wfbj\") pod \"cert-manager-webhook-687f57d79b-25nfw\" (UID: \"59f9bc30-5a23-4161-95fa-68d941208670\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.643373 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkkxw\" (UniqueName: \"kubernetes.io/projected/a5762e00-3e81-401d-8365-8d6791ecbf4f-kube-api-access-hkkxw\") pod \"cert-manager-858654f9db-vchrs\" (UID: \"a5762e00-3e81-401d-8365-8d6791ecbf4f\") " 
pod="cert-manager/cert-manager-858654f9db-vchrs" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.644135 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnr6p\" (UniqueName: \"kubernetes.io/projected/c7ebfede-1363-495d-b143-2e8db44394c0-kube-api-access-hnr6p\") pod \"cert-manager-cainjector-cf98fcc89-dfqdk\" (UID: \"c7ebfede-1363-495d-b143-2e8db44394c0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.657100 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.684937 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" Jan 07 03:43:15 crc kubenswrapper[4980]: I0107 03:43:15.689864 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vchrs" Jan 07 03:43:16 crc kubenswrapper[4980]: I0107 03:43:16.191211 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk"] Jan 07 03:43:16 crc kubenswrapper[4980]: I0107 03:43:16.207183 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 03:43:16 crc kubenswrapper[4980]: I0107 03:43:16.221823 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25nfw"] Jan 07 03:43:16 crc kubenswrapper[4980]: W0107 03:43:16.226638 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59f9bc30_5a23_4161_95fa_68d941208670.slice/crio-8d11b37ce7001eb34423a96eb71c53de7a1daedd68450bdfe29fb98a3c007395 WatchSource:0}: Error finding container 8d11b37ce7001eb34423a96eb71c53de7a1daedd68450bdfe29fb98a3c007395: Status 404 
returned error can't find the container with id 8d11b37ce7001eb34423a96eb71c53de7a1daedd68450bdfe29fb98a3c007395 Jan 07 03:43:16 crc kubenswrapper[4980]: I0107 03:43:16.233032 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vchrs"] Jan 07 03:43:16 crc kubenswrapper[4980]: W0107 03:43:16.238697 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5762e00_3e81_401d_8365_8d6791ecbf4f.slice/crio-4eb0976c0d9b7bc228f0ff119e2ad04f824875e30a32ea171667a9c182f7f437 WatchSource:0}: Error finding container 4eb0976c0d9b7bc228f0ff119e2ad04f824875e30a32ea171667a9c182f7f437: Status 404 returned error can't find the container with id 4eb0976c0d9b7bc228f0ff119e2ad04f824875e30a32ea171667a9c182f7f437 Jan 07 03:43:16 crc kubenswrapper[4980]: I0107 03:43:16.352889 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" event={"ID":"c7ebfede-1363-495d-b143-2e8db44394c0","Type":"ContainerStarted","Data":"107cc3aa5e6c5a9372a2424230cac3907da39bc63f44a1bcdbc0d8dd2995af8e"} Jan 07 03:43:16 crc kubenswrapper[4980]: I0107 03:43:16.355151 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" event={"ID":"59f9bc30-5a23-4161-95fa-68d941208670","Type":"ContainerStarted","Data":"8d11b37ce7001eb34423a96eb71c53de7a1daedd68450bdfe29fb98a3c007395"} Jan 07 03:43:16 crc kubenswrapper[4980]: I0107 03:43:16.356101 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vchrs" event={"ID":"a5762e00-3e81-401d-8365-8d6791ecbf4f","Type":"ContainerStarted","Data":"4eb0976c0d9b7bc228f0ff119e2ad04f824875e30a32ea171667a9c182f7f437"} Jan 07 03:43:21 crc kubenswrapper[4980]: I0107 03:43:21.395078 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vchrs" 
event={"ID":"a5762e00-3e81-401d-8365-8d6791ecbf4f","Type":"ContainerStarted","Data":"412880dd61f696d46edd0739366eb2f5ca5afcaf5493f12c3adabe25e9a6b96a"} Jan 07 03:43:21 crc kubenswrapper[4980]: I0107 03:43:21.397483 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" event={"ID":"c7ebfede-1363-495d-b143-2e8db44394c0","Type":"ContainerStarted","Data":"c82bc42e16df10f1d0666922d146bcc4df65aed6fe8f1cf7ee25c79428496c13"} Jan 07 03:43:21 crc kubenswrapper[4980]: I0107 03:43:21.399855 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" event={"ID":"59f9bc30-5a23-4161-95fa-68d941208670","Type":"ContainerStarted","Data":"6ecf76173699f3370e4fb46bbf84bbcdd9a9a789d7ed0138895c189885fc36c8"} Jan 07 03:43:21 crc kubenswrapper[4980]: I0107 03:43:21.400087 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" Jan 07 03:43:21 crc kubenswrapper[4980]: I0107 03:43:21.418350 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vchrs" podStartSLOduration=2.3514426840000002 podStartE2EDuration="6.418326834s" podCreationTimestamp="2026-01-07 03:43:15 +0000 UTC" firstStartedPulling="2026-01-07 03:43:16.241976536 +0000 UTC m=+642.807671271" lastFinishedPulling="2026-01-07 03:43:20.308860676 +0000 UTC m=+646.874555421" observedRunningTime="2026-01-07 03:43:21.417099236 +0000 UTC m=+647.982794041" watchObservedRunningTime="2026-01-07 03:43:21.418326834 +0000 UTC m=+647.984021609" Jan 07 03:43:21 crc kubenswrapper[4980]: I0107 03:43:21.451328 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" podStartSLOduration=2.419017929 podStartE2EDuration="6.451296987s" podCreationTimestamp="2026-01-07 03:43:15 +0000 UTC" firstStartedPulling="2026-01-07 03:43:16.228694517 +0000 
UTC m=+642.794389262" lastFinishedPulling="2026-01-07 03:43:20.260973545 +0000 UTC m=+646.826668320" observedRunningTime="2026-01-07 03:43:21.445722595 +0000 UTC m=+648.011417420" watchObservedRunningTime="2026-01-07 03:43:21.451296987 +0000 UTC m=+648.016991762" Jan 07 03:43:21 crc kubenswrapper[4980]: I0107 03:43:21.488093 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dfqdk" podStartSLOduration=2.450117824 podStartE2EDuration="6.488059156s" podCreationTimestamp="2026-01-07 03:43:15 +0000 UTC" firstStartedPulling="2026-01-07 03:43:16.206838636 +0000 UTC m=+642.772533381" lastFinishedPulling="2026-01-07 03:43:20.244779968 +0000 UTC m=+646.810474713" observedRunningTime="2026-01-07 03:43:21.475763738 +0000 UTC m=+648.041458543" watchObservedRunningTime="2026-01-07 03:43:21.488059156 +0000 UTC m=+648.053753911" Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.370607 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5n7sj"] Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.373627 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-controller" containerID="cri-o://6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.373749 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="nbdb" containerID="cri-o://7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.373907 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" 
podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="sbdb" containerID="cri-o://da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.374092 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-node" containerID="cri-o://506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.374191 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-acl-logging" containerID="cri-o://f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.374241 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="northd" containerID="cri-o://c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.374365 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.419198 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" 
containerID="cri-o://2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" gracePeriod=30 Jan 07 03:43:25 crc kubenswrapper[4980]: I0107 03:43:25.689256 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-25nfw" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.141697 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/3.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.146217 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovn-acl-logging/0.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.147086 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovn-controller/0.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.147891 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239095 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7vdkm"] Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239486 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239516 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239536 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239549 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239594 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239607 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239626 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239638 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239658 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-acl-logging" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239670 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-acl-logging" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239684 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="northd" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239696 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="northd" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239712 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="nbdb" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239723 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="nbdb" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239737 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kubecfg-setup" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239749 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kubecfg-setup" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239772 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-node" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239784 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-node" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239798 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" 
containerName="sbdb" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239809 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="sbdb" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239827 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239839 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.239858 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-ovn-metrics" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.239873 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-ovn-metrics" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240041 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-acl-logging" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240060 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240078 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-node" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240090 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240105 4980 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovn-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240120 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240134 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="sbdb" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240153 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="northd" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240166 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="kube-rbac-proxy-ovn-metrics" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240181 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="nbdb" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.240340 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240354 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240528 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.240547 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" containerName="ovnkube-controller" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.243428 4980 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.316686 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-netns\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.316806 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c962a95-c8ed-4d65-810e-1da967416c06-ovn-node-metrics-cert\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.316855 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-etc-openvswitch\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.316889 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-config\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.316935 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-env-overrides\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.316985 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-script-lib\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317019 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-kubelet\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317059 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-systemd\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317089 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-log-socket\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317117 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-bin\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317150 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-var-lib-openvswitch\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 
03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317179 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-node-log\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317248 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-slash\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317278 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317310 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-netd\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317368 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-openvswitch\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317395 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-systemd-units\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317432 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-ovn\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317462 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-ovn-kubernetes\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.317501 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzxmn\" (UniqueName: \"kubernetes.io/projected/6c962a95-c8ed-4d65-810e-1da967416c06-kube-api-access-xzxmn\") pod \"6c962a95-c8ed-4d65-810e-1da967416c06\" (UID: \"6c962a95-c8ed-4d65-810e-1da967416c06\") " Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.316872 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318130 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318185 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318711 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318803 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-log-socket" (OuterVolumeSpecName: "log-socket") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318844 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318881 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318918 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-node-log" (OuterVolumeSpecName: "node-log") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318954 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-slash" (OuterVolumeSpecName: "host-slash") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.318991 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.319031 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.319068 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.319108 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.319185 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.319798 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.319900 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.319935 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.327176 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c962a95-c8ed-4d65-810e-1da967416c06-kube-api-access-xzxmn" (OuterVolumeSpecName: "kube-api-access-xzxmn") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "kube-api-access-xzxmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.328986 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c962a95-c8ed-4d65-810e-1da967416c06-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.348202 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6c962a95-c8ed-4d65-810e-1da967416c06" (UID: "6c962a95-c8ed-4d65-810e-1da967416c06"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419302 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-ovnkube-config\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419378 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-slash\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419412 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d027894-c896-482a-8524-0c8221089a45-ovn-node-metrics-cert\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419482 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-env-overrides\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419525 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419598 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-systemd-units\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419635 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-run-ovn-kubernetes\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419666 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-ovnkube-script-lib\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419693 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-run-netns\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419726 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-systemd\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419764 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-etc-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419801 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7vq\" (UniqueName: \"kubernetes.io/projected/2d027894-c896-482a-8524-0c8221089a45-kube-api-access-sn7vq\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419858 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-kubelet\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419886 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-var-lib-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419914 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-node-log\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419947 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-ovn\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.419976 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-cni-bin\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420008 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-cni-netd\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420038 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420066 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-log-socket\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420134 4980 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420155 4980 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-log-socket\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420174 4980 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420195 4980 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420214 4980 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-node-log\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420233 4980 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-slash\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420251 4980 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420269 4980 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420287 4980 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420304 4980 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420321 4980 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420338 4980 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420356 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzxmn\" (UniqueName: \"kubernetes.io/projected/6c962a95-c8ed-4d65-810e-1da967416c06-kube-api-access-xzxmn\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420373 4980 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-run-netns\") 
on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420390 4980 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c962a95-c8ed-4d65-810e-1da967416c06-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420406 4980 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420423 4980 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420440 4980 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420457 4980 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c962a95-c8ed-4d65-810e-1da967416c06-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.420474 4980 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c962a95-c8ed-4d65-810e-1da967416c06-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.439067 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovnkube-controller/3.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.443182 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovn-acl-logging/0.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444049 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5n7sj_6c962a95-c8ed-4d65-810e-1da967416c06/ovn-controller/0.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444776 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" exitCode=0 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444823 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" exitCode=0 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444850 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" exitCode=0 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444873 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" exitCode=0 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444891 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" exitCode=0 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444909 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" exitCode=0 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444913 4980 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444934 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" exitCode=143 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444896 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445006 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445037 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445061 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445085 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445092 4980 
scope.go:117] "RemoveContainer" containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445105 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445281 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445306 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445318 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445329 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445341 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445352 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.444952 4980 
generic.go:334] "Generic (PLEG): container finished" podID="6c962a95-c8ed-4d65-810e-1da967416c06" containerID="6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" exitCode=143 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445366 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445379 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445389 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445415 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445439 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445453 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445465 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445477 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445488 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445499 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445511 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445525 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445539 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445591 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445616 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445642 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445659 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445674 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445688 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445702 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445715 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445730 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 
03:43:26.445745 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445758 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445768 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445787 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5n7sj" event={"ID":"6c962a95-c8ed-4d65-810e-1da967416c06","Type":"ContainerDied","Data":"65e117d339419b5fcebfd45e8551dd0f29f30441331a6e21b5c52ff1ec1ea01e"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445805 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445818 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445832 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445846 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445858 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445869 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445880 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445891 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445902 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.445913 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.448883 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/2.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.449785 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/1.log" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.449892 4980 generic.go:334] "Generic (PLEG): container finished" podID="3b3e552e-9608-4577-86c3-5f7573ef22f6" containerID="009f39239d8a76a13491308f9e197bfa3b38115c0fa817eb2a9167194b0bb5a3" exitCode=2 Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.449979 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerDied","Data":"009f39239d8a76a13491308f9e197bfa3b38115c0fa817eb2a9167194b0bb5a3"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.450017 4980 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643"} Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.451613 4980 scope.go:117] "RemoveContainer" containerID="009f39239d8a76a13491308f9e197bfa3b38115c0fa817eb2a9167194b0bb5a3" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.452277 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9ct5r_openshift-multus(3b3e552e-9608-4577-86c3-5f7573ef22f6)\"" pod="openshift-multus/multus-9ct5r" podUID="3b3e552e-9608-4577-86c3-5f7573ef22f6" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.478333 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.514750 4980 scope.go:117] "RemoveContainer" containerID="da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.518844 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-5n7sj"] Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.521636 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-env-overrides\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.521713 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.521753 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-systemd-units\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.521794 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-run-ovn-kubernetes\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.521827 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-ovnkube-script-lib\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.521859 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-run-netns\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522014 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-systemd\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522069 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-systemd-units\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522071 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-etc-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522129 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-etc-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 
03:43:26.522180 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-run-ovn-kubernetes\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522286 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-run-netns\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522313 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-systemd\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522337 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522408 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn7vq\" (UniqueName: \"kubernetes.io/projected/2d027894-c896-482a-8524-0c8221089a45-kube-api-access-sn7vq\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522488 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-kubelet\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522513 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-var-lib-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522534 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-node-log\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522587 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-ovn\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522608 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-cni-bin\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522638 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-cni-netd\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522662 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522669 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-var-lib-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522689 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-ovn\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522682 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-log-socket\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522706 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-log-socket\") pod \"ovnkube-node-7vdkm\" 
(UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522728 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-kubelet\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522748 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-cni-bin\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522761 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-run-openvswitch\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522782 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-node-log\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522782 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-ovnkube-config\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc 
kubenswrapper[4980]: I0107 03:43:26.522821 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-slash\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522847 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d027894-c896-482a-8524-0c8221089a45-ovn-node-metrics-cert\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522870 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-slash\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.522788 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d027894-c896-482a-8524-0c8221089a45-host-cni-netd\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.523149 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-env-overrides\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.523339 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-ovnkube-script-lib\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.523505 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d027894-c896-482a-8524-0c8221089a45-ovnkube-config\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.524532 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5n7sj"] Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.528052 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d027894-c896-482a-8524-0c8221089a45-ovn-node-metrics-cert\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.550746 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn7vq\" (UniqueName: \"kubernetes.io/projected/2d027894-c896-482a-8524-0c8221089a45-kube-api-access-sn7vq\") pod \"ovnkube-node-7vdkm\" (UID: \"2d027894-c896-482a-8524-0c8221089a45\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.570975 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.589243 4980 scope.go:117] "RemoveContainer" containerID="7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.634519 4980 scope.go:117] "RemoveContainer" containerID="c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.653818 4980 scope.go:117] "RemoveContainer" containerID="181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.673594 4980 scope.go:117] "RemoveContainer" containerID="506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.704371 4980 scope.go:117] "RemoveContainer" containerID="f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.728778 4980 scope.go:117] "RemoveContainer" containerID="6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.752333 4980 scope.go:117] "RemoveContainer" containerID="10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.778646 4980 scope.go:117] "RemoveContainer" containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.779198 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": container with ID starting with 2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd not found: ID does not exist" containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" Jan 07 03:43:26 crc 
kubenswrapper[4980]: I0107 03:43:26.779273 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} err="failed to get container status \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": rpc error: code = NotFound desc = could not find container \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": container with ID starting with 2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.779302 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.780031 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": container with ID starting with c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac not found: ID does not exist" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.780070 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} err="failed to get container status \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": rpc error: code = NotFound desc = could not find container \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": container with ID starting with c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.780117 4980 scope.go:117] "RemoveContainer" containerID="da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" Jan 07 
03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.780650 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": container with ID starting with da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067 not found: ID does not exist" containerID="da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.780679 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} err="failed to get container status \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": rpc error: code = NotFound desc = could not find container \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": container with ID starting with da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.780721 4980 scope.go:117] "RemoveContainer" containerID="7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.781072 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": container with ID starting with 7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8 not found: ID does not exist" containerID="7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.781103 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} err="failed to get container status 
\"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": rpc error: code = NotFound desc = could not find container \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": container with ID starting with 7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.781123 4980 scope.go:117] "RemoveContainer" containerID="c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.784495 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": container with ID starting with c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4 not found: ID does not exist" containerID="c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.784533 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} err="failed to get container status \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": rpc error: code = NotFound desc = could not find container \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": container with ID starting with c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.784595 4980 scope.go:117] "RemoveContainer" containerID="181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.785148 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": container with ID starting with 181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983 not found: ID does not exist" containerID="181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.785197 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} err="failed to get container status \"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": rpc error: code = NotFound desc = could not find container \"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": container with ID starting with 181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.785226 4980 scope.go:117] "RemoveContainer" containerID="506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.785743 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": container with ID starting with 506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3 not found: ID does not exist" containerID="506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.785796 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} err="failed to get container status \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": rpc error: code = NotFound desc = could not find container \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": container with ID 
starting with 506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.785828 4980 scope.go:117] "RemoveContainer" containerID="f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.790189 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": container with ID starting with f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10 not found: ID does not exist" containerID="f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.790244 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} err="failed to get container status \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": rpc error: code = NotFound desc = could not find container \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": container with ID starting with f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.790277 4980 scope.go:117] "RemoveContainer" containerID="6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.791147 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": container with ID starting with 6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788 not found: ID does not exist" containerID="6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" Jan 07 
03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.791189 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} err="failed to get container status \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": rpc error: code = NotFound desc = could not find container \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": container with ID starting with 6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.791212 4980 scope.go:117] "RemoveContainer" containerID="10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86" Jan 07 03:43:26 crc kubenswrapper[4980]: E0107 03:43:26.791613 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": container with ID starting with 10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86 not found: ID does not exist" containerID="10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.791665 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} err="failed to get container status \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": rpc error: code = NotFound desc = could not find container \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": container with ID starting with 10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.791684 4980 scope.go:117] "RemoveContainer" 
containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.792783 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} err="failed to get container status \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": rpc error: code = NotFound desc = could not find container \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": container with ID starting with 2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.792812 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.793179 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} err="failed to get container status \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": rpc error: code = NotFound desc = could not find container \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": container with ID starting with c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.793223 4980 scope.go:117] "RemoveContainer" containerID="da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.793625 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} err="failed to get container status \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": rpc error: code = NotFound desc = could 
not find container \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": container with ID starting with da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.793661 4980 scope.go:117] "RemoveContainer" containerID="7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.794029 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} err="failed to get container status \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": rpc error: code = NotFound desc = could not find container \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": container with ID starting with 7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.794062 4980 scope.go:117] "RemoveContainer" containerID="c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.794418 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} err="failed to get container status \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": rpc error: code = NotFound desc = could not find container \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": container with ID starting with c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.794454 4980 scope.go:117] "RemoveContainer" containerID="181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 
03:43:26.794861 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} err="failed to get container status \"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": rpc error: code = NotFound desc = could not find container \"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": container with ID starting with 181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.794907 4980 scope.go:117] "RemoveContainer" containerID="506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.795367 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} err="failed to get container status \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": rpc error: code = NotFound desc = could not find container \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": container with ID starting with 506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.795395 4980 scope.go:117] "RemoveContainer" containerID="f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.795867 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} err="failed to get container status \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": rpc error: code = NotFound desc = could not find container \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": container with ID starting with 
f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.795923 4980 scope.go:117] "RemoveContainer" containerID="6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.796256 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} err="failed to get container status \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": rpc error: code = NotFound desc = could not find container \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": container with ID starting with 6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.796306 4980 scope.go:117] "RemoveContainer" containerID="10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.796898 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} err="failed to get container status \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": rpc error: code = NotFound desc = could not find container \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": container with ID starting with 10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.796927 4980 scope.go:117] "RemoveContainer" containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.797679 4980 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} err="failed to get container status \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": rpc error: code = NotFound desc = could not find container \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": container with ID starting with 2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.797704 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.798065 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} err="failed to get container status \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": rpc error: code = NotFound desc = could not find container \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": container with ID starting with c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.798096 4980 scope.go:117] "RemoveContainer" containerID="da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.798516 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} err="failed to get container status \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": rpc error: code = NotFound desc = could not find container \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": container with ID starting with da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067 not found: ID does not 
exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.798577 4980 scope.go:117] "RemoveContainer" containerID="7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.800175 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} err="failed to get container status \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": rpc error: code = NotFound desc = could not find container \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": container with ID starting with 7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.800204 4980 scope.go:117] "RemoveContainer" containerID="c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.800665 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} err="failed to get container status \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": rpc error: code = NotFound desc = could not find container \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": container with ID starting with c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.800692 4980 scope.go:117] "RemoveContainer" containerID="181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.801053 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} err="failed to get container status 
\"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": rpc error: code = NotFound desc = could not find container \"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": container with ID starting with 181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.801081 4980 scope.go:117] "RemoveContainer" containerID="506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.801374 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} err="failed to get container status \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": rpc error: code = NotFound desc = could not find container \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": container with ID starting with 506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.801395 4980 scope.go:117] "RemoveContainer" containerID="f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.801923 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} err="failed to get container status \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": rpc error: code = NotFound desc = could not find container \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": container with ID starting with f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.801951 4980 scope.go:117] "RemoveContainer" 
containerID="6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.802305 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} err="failed to get container status \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": rpc error: code = NotFound desc = could not find container \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": container with ID starting with 6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.802329 4980 scope.go:117] "RemoveContainer" containerID="10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.802709 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} err="failed to get container status \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": rpc error: code = NotFound desc = could not find container \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": container with ID starting with 10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.802733 4980 scope.go:117] "RemoveContainer" containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.803005 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} err="failed to get container status \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": rpc error: code = NotFound desc = could 
not find container \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": container with ID starting with 2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.803029 4980 scope.go:117] "RemoveContainer" containerID="c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.803532 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac"} err="failed to get container status \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": rpc error: code = NotFound desc = could not find container \"c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac\": container with ID starting with c77bdfde77cb12420615e2b75bc0604d4e7ae315cdf8540126d854b5e591bdac not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.803593 4980 scope.go:117] "RemoveContainer" containerID="da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.804014 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067"} err="failed to get container status \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": rpc error: code = NotFound desc = could not find container \"da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067\": container with ID starting with da5c247a30c60ee165f3557b2058226523b4228268da55c5c82290b5d9688067 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.804044 4980 scope.go:117] "RemoveContainer" containerID="7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 
03:43:26.804505 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8"} err="failed to get container status \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": rpc error: code = NotFound desc = could not find container \"7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8\": container with ID starting with 7413c9583e060b37229d407098b8a9d58331f9c009d3d240c9bddf2afaa95fd8 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.804530 4980 scope.go:117] "RemoveContainer" containerID="c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.804901 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4"} err="failed to get container status \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": rpc error: code = NotFound desc = could not find container \"c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4\": container with ID starting with c97273fd2ea74810aaa99ad286ec191a5e5a243e2dd24ac3fa37ac7dd9dc42e4 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.804942 4980 scope.go:117] "RemoveContainer" containerID="181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.805337 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983"} err="failed to get container status \"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": rpc error: code = NotFound desc = could not find container \"181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983\": container with ID starting with 
181f71b294ab6589470fa568685fda9c38f1376ad97dc7b1ab8300c95ec4d983 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.805399 4980 scope.go:117] "RemoveContainer" containerID="506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.805815 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3"} err="failed to get container status \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": rpc error: code = NotFound desc = could not find container \"506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3\": container with ID starting with 506fdca12fefd659d5e1999a267c6710ce9a60ffc1f47fc824a8b8b13871c1c3 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.805844 4980 scope.go:117] "RemoveContainer" containerID="f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.806150 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10"} err="failed to get container status \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": rpc error: code = NotFound desc = could not find container \"f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10\": container with ID starting with f0ab791f95e12b05ab4093b1472a187461557ce7779216ec8a869c28b390ba10 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.806182 4980 scope.go:117] "RemoveContainer" containerID="6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.806590 4980 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788"} err="failed to get container status \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": rpc error: code = NotFound desc = could not find container \"6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788\": container with ID starting with 6254c6e01da4377b1aaf51c4160e53797b3e339d71b4aec45c5ad962ce9b9788 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.806613 4980 scope.go:117] "RemoveContainer" containerID="10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.806922 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86"} err="failed to get container status \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": rpc error: code = NotFound desc = could not find container \"10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86\": container with ID starting with 10c5a836654797bf5c57c78599c084cf7b1c8f0a62f1f9e458d5b1e738d9ec86 not found: ID does not exist" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.806948 4980 scope.go:117] "RemoveContainer" containerID="2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd" Jan 07 03:43:26 crc kubenswrapper[4980]: I0107 03:43:26.807253 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd"} err="failed to get container status \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": rpc error: code = NotFound desc = could not find container \"2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd\": container with ID starting with 2793f6fb2cbed34edf36c0bc9ea167605fddb15a3173ed137c50f64e5faab3dd not found: ID does not 
exist" Jan 07 03:43:27 crc kubenswrapper[4980]: I0107 03:43:27.462375 4980 generic.go:334] "Generic (PLEG): container finished" podID="2d027894-c896-482a-8524-0c8221089a45" containerID="48243ead7e40de168ffee24abc3ec8fdf502853662f58d714219f7837fe51156" exitCode=0 Jan 07 03:43:27 crc kubenswrapper[4980]: I0107 03:43:27.462981 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerDied","Data":"48243ead7e40de168ffee24abc3ec8fdf502853662f58d714219f7837fe51156"} Jan 07 03:43:27 crc kubenswrapper[4980]: I0107 03:43:27.463025 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"39679e81ac5417a26eabdb7a611b418d5ca817eb29eed44a73d90c1d422ed07a"} Jan 07 03:43:27 crc kubenswrapper[4980]: I0107 03:43:27.741467 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c962a95-c8ed-4d65-810e-1da967416c06" path="/var/lib/kubelet/pods/6c962a95-c8ed-4d65-810e-1da967416c06/volumes" Jan 07 03:43:28 crc kubenswrapper[4980]: I0107 03:43:28.484122 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"58954c336afbde2a1c3c9e73c1df93d4797a287ba7178bfbcc20e763934e7806"} Jan 07 03:43:28 crc kubenswrapper[4980]: I0107 03:43:28.484661 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"ac982b9427fcd7d1b53b9c845549ee4d7ef55a8a465d6761774841156d0695da"} Jan 07 03:43:28 crc kubenswrapper[4980]: I0107 03:43:28.484690 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" 
event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"5d9a556091b93f3c2632c9faf712376938467013299295777396f25cdbc2abf0"} Jan 07 03:43:28 crc kubenswrapper[4980]: I0107 03:43:28.484712 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"b6b930b3c6d2ef7e96829ab4adfc9db5af82ab7e99833ed09e857e5be9f1104a"} Jan 07 03:43:28 crc kubenswrapper[4980]: I0107 03:43:28.484750 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"259f770bd720fc49540f91082454d398313fe9ce9bc379448ddca3cc4c92484c"} Jan 07 03:43:28 crc kubenswrapper[4980]: I0107 03:43:28.484771 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"daa30e3fe148493f29765380150876ad766b896628a2fc8e8fcac5723caced45"} Jan 07 03:43:31 crc kubenswrapper[4980]: I0107 03:43:31.515086 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"aca06b9582ef84aa5fc54c978e44644fe0996046b734626e881cf73c66041f5a"} Jan 07 03:43:33 crc kubenswrapper[4980]: I0107 03:43:33.532753 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" event={"ID":"2d027894-c896-482a-8524-0c8221089a45","Type":"ContainerStarted","Data":"df251df9cb517f57acac8772fb99165fd1a59d26bddcac1109a6cb1e167d4a46"} Jan 07 03:43:33 crc kubenswrapper[4980]: I0107 03:43:33.533321 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:33 crc kubenswrapper[4980]: I0107 03:43:33.533337 
4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:33 crc kubenswrapper[4980]: I0107 03:43:33.533350 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:33 crc kubenswrapper[4980]: I0107 03:43:33.609529 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:33 crc kubenswrapper[4980]: I0107 03:43:33.614067 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:33 crc kubenswrapper[4980]: I0107 03:43:33.616594 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" podStartSLOduration=7.616570139 podStartE2EDuration="7.616570139s" podCreationTimestamp="2026-01-07 03:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:43:33.609501083 +0000 UTC m=+660.175195818" watchObservedRunningTime="2026-01-07 03:43:33.616570139 +0000 UTC m=+660.182264874" Jan 07 03:43:34 crc kubenswrapper[4980]: I0107 03:43:34.302671 4980 scope.go:117] "RemoveContainer" containerID="16c5f8b24cd9dff6094a430c5eb226259115e22ab47ce38eeabdc1d675a5c643" Jan 07 03:43:35 crc kubenswrapper[4980]: I0107 03:43:35.550705 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/2.log" Jan 07 03:43:41 crc kubenswrapper[4980]: I0107 03:43:41.735968 4980 scope.go:117] "RemoveContainer" containerID="009f39239d8a76a13491308f9e197bfa3b38115c0fa817eb2a9167194b0bb5a3" Jan 07 03:43:41 crc kubenswrapper[4980]: E0107 03:43:41.736718 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9ct5r_openshift-multus(3b3e552e-9608-4577-86c3-5f7573ef22f6)\"" pod="openshift-multus/multus-9ct5r" podUID="3b3e552e-9608-4577-86c3-5f7573ef22f6" Jan 07 03:43:56 crc kubenswrapper[4980]: I0107 03:43:56.601051 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7vdkm" Jan 07 03:43:56 crc kubenswrapper[4980]: I0107 03:43:56.736894 4980 scope.go:117] "RemoveContainer" containerID="009f39239d8a76a13491308f9e197bfa3b38115c0fa817eb2a9167194b0bb5a3" Jan 07 03:43:57 crc kubenswrapper[4980]: I0107 03:43:57.702676 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9ct5r_3b3e552e-9608-4577-86c3-5f7573ef22f6/kube-multus/2.log" Jan 07 03:43:57 crc kubenswrapper[4980]: I0107 03:43:57.703261 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9ct5r" event={"ID":"3b3e552e-9608-4577-86c3-5f7573ef22f6","Type":"ContainerStarted","Data":"fe04ba3c795c5e65eab8d89a2a561376078f1117ddd0063ccc5a2d43c93db2c7"} Jan 07 03:44:06 crc kubenswrapper[4980]: I0107 03:44:06.542991 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:44:06 crc kubenswrapper[4980]: I0107 03:44:06.544775 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.275948 4980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l"] Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.277327 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.280027 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.289215 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l"] Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.458031 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.458398 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.458674 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btb26\" (UniqueName: \"kubernetes.io/projected/48de5406-371a-47e2-90d8-d6fd88506301-kube-api-access-btb26\") pod 
\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.560202 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.560298 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btb26\" (UniqueName: \"kubernetes.io/projected/48de5406-371a-47e2-90d8-d6fd88506301-kube-api-access-btb26\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.560427 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.562157 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.562892 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.598075 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btb26\" (UniqueName: \"kubernetes.io/projected/48de5406-371a-47e2-90d8-d6fd88506301-kube-api-access-btb26\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.599059 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:08 crc kubenswrapper[4980]: I0107 03:44:08.911914 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l"] Jan 07 03:44:08 crc kubenswrapper[4980]: W0107 03:44:08.918970 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48de5406_371a_47e2_90d8_d6fd88506301.slice/crio-b69051646b2f4875515844fdff43f09868626949222164bb2c1bf4a70a696314 WatchSource:0}: Error finding container b69051646b2f4875515844fdff43f09868626949222164bb2c1bf4a70a696314: Status 404 returned error can't find the container with id b69051646b2f4875515844fdff43f09868626949222164bb2c1bf4a70a696314 Jan 07 03:44:09 crc kubenswrapper[4980]: I0107 03:44:09.793169 4980 generic.go:334] "Generic (PLEG): container finished" podID="48de5406-371a-47e2-90d8-d6fd88506301" containerID="b9ad88df6e4fcb549e2022b1c121bfaeadf8e82dc0bf0c7d3e5e36d7cfe7ab58" exitCode=0 Jan 07 03:44:09 crc kubenswrapper[4980]: I0107 03:44:09.793586 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" event={"ID":"48de5406-371a-47e2-90d8-d6fd88506301","Type":"ContainerDied","Data":"b9ad88df6e4fcb549e2022b1c121bfaeadf8e82dc0bf0c7d3e5e36d7cfe7ab58"} Jan 07 03:44:09 crc kubenswrapper[4980]: I0107 03:44:09.793626 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" event={"ID":"48de5406-371a-47e2-90d8-d6fd88506301","Type":"ContainerStarted","Data":"b69051646b2f4875515844fdff43f09868626949222164bb2c1bf4a70a696314"} Jan 07 03:44:14 crc kubenswrapper[4980]: I0107 03:44:14.832151 4980 generic.go:334] "Generic (PLEG): container finished" 
podID="48de5406-371a-47e2-90d8-d6fd88506301" containerID="7edcd330a37c4b54f21f2ffccee29c5a0820800c9c21ba32ec8edab9092df9c8" exitCode=0 Jan 07 03:44:14 crc kubenswrapper[4980]: I0107 03:44:14.832304 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" event={"ID":"48de5406-371a-47e2-90d8-d6fd88506301","Type":"ContainerDied","Data":"7edcd330a37c4b54f21f2ffccee29c5a0820800c9c21ba32ec8edab9092df9c8"} Jan 07 03:44:16 crc kubenswrapper[4980]: I0107 03:44:16.851194 4980 generic.go:334] "Generic (PLEG): container finished" podID="48de5406-371a-47e2-90d8-d6fd88506301" containerID="daca478eefee6f5d61e72cba8bac2bde095ef0c20a547f4db00f5d07762bcd7e" exitCode=0 Jan 07 03:44:16 crc kubenswrapper[4980]: I0107 03:44:16.851258 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" event={"ID":"48de5406-371a-47e2-90d8-d6fd88506301","Type":"ContainerDied","Data":"daca478eefee6f5d61e72cba8bac2bde095ef0c20a547f4db00f5d07762bcd7e"} Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.202934 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.307570 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btb26\" (UniqueName: \"kubernetes.io/projected/48de5406-371a-47e2-90d8-d6fd88506301-kube-api-access-btb26\") pod \"48de5406-371a-47e2-90d8-d6fd88506301\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.307739 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-bundle\") pod \"48de5406-371a-47e2-90d8-d6fd88506301\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.307805 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-util\") pod \"48de5406-371a-47e2-90d8-d6fd88506301\" (UID: \"48de5406-371a-47e2-90d8-d6fd88506301\") " Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.308664 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-bundle" (OuterVolumeSpecName: "bundle") pod "48de5406-371a-47e2-90d8-d6fd88506301" (UID: "48de5406-371a-47e2-90d8-d6fd88506301"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.317383 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-util" (OuterVolumeSpecName: "util") pod "48de5406-371a-47e2-90d8-d6fd88506301" (UID: "48de5406-371a-47e2-90d8-d6fd88506301"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.317823 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48de5406-371a-47e2-90d8-d6fd88506301-kube-api-access-btb26" (OuterVolumeSpecName: "kube-api-access-btb26") pod "48de5406-371a-47e2-90d8-d6fd88506301" (UID: "48de5406-371a-47e2-90d8-d6fd88506301"). InnerVolumeSpecName "kube-api-access-btb26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.409712 4980 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.409799 4980 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48de5406-371a-47e2-90d8-d6fd88506301-util\") on node \"crc\" DevicePath \"\"" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.409827 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btb26\" (UniqueName: \"kubernetes.io/projected/48de5406-371a-47e2-90d8-d6fd88506301-kube-api-access-btb26\") on node \"crc\" DevicePath \"\"" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.869331 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" event={"ID":"48de5406-371a-47e2-90d8-d6fd88506301","Type":"ContainerDied","Data":"b69051646b2f4875515844fdff43f09868626949222164bb2c1bf4a70a696314"} Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.869402 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69051646b2f4875515844fdff43f09868626949222164bb2c1bf4a70a696314" Jan 07 03:44:18 crc kubenswrapper[4980]: I0107 03:44:18.869467 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.899138 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-jjfdp"] Jan 07 03:44:24 crc kubenswrapper[4980]: E0107 03:44:24.899705 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48de5406-371a-47e2-90d8-d6fd88506301" containerName="util" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.899721 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="48de5406-371a-47e2-90d8-d6fd88506301" containerName="util" Jan 07 03:44:24 crc kubenswrapper[4980]: E0107 03:44:24.899734 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48de5406-371a-47e2-90d8-d6fd88506301" containerName="pull" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.899742 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="48de5406-371a-47e2-90d8-d6fd88506301" containerName="pull" Jan 07 03:44:24 crc kubenswrapper[4980]: E0107 03:44:24.899754 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48de5406-371a-47e2-90d8-d6fd88506301" containerName="extract" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.899764 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="48de5406-371a-47e2-90d8-d6fd88506301" containerName="extract" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.899902 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="48de5406-371a-47e2-90d8-d6fd88506301" containerName="extract" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.900377 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.903718 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-kgs9h" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.904131 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.905983 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 07 03:44:24 crc kubenswrapper[4980]: I0107 03:44:24.914102 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-jjfdp"] Jan 07 03:44:25 crc kubenswrapper[4980]: I0107 03:44:25.007613 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86hlk\" (UniqueName: \"kubernetes.io/projected/b6737cd2-b163-4a8a-a674-54ba3a715f91-kube-api-access-86hlk\") pod \"nmstate-operator-6769fb99d-jjfdp\" (UID: \"b6737cd2-b163-4a8a-a674-54ba3a715f91\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" Jan 07 03:44:25 crc kubenswrapper[4980]: I0107 03:44:25.109440 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86hlk\" (UniqueName: \"kubernetes.io/projected/b6737cd2-b163-4a8a-a674-54ba3a715f91-kube-api-access-86hlk\") pod \"nmstate-operator-6769fb99d-jjfdp\" (UID: \"b6737cd2-b163-4a8a-a674-54ba3a715f91\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" Jan 07 03:44:25 crc kubenswrapper[4980]: I0107 03:44:25.141227 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86hlk\" (UniqueName: \"kubernetes.io/projected/b6737cd2-b163-4a8a-a674-54ba3a715f91-kube-api-access-86hlk\") pod \"nmstate-operator-6769fb99d-jjfdp\" (UID: 
\"b6737cd2-b163-4a8a-a674-54ba3a715f91\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" Jan 07 03:44:25 crc kubenswrapper[4980]: I0107 03:44:25.214506 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" Jan 07 03:44:25 crc kubenswrapper[4980]: I0107 03:44:25.547531 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-jjfdp"] Jan 07 03:44:25 crc kubenswrapper[4980]: W0107 03:44:25.554277 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6737cd2_b163_4a8a_a674_54ba3a715f91.slice/crio-d123e81b9c15166ddd84b285230695330b05a6bf8dc2df1d9f5ca45d1731abb1 WatchSource:0}: Error finding container d123e81b9c15166ddd84b285230695330b05a6bf8dc2df1d9f5ca45d1731abb1: Status 404 returned error can't find the container with id d123e81b9c15166ddd84b285230695330b05a6bf8dc2df1d9f5ca45d1731abb1 Jan 07 03:44:25 crc kubenswrapper[4980]: I0107 03:44:25.910674 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" event={"ID":"b6737cd2-b163-4a8a-a674-54ba3a715f91","Type":"ContainerStarted","Data":"d123e81b9c15166ddd84b285230695330b05a6bf8dc2df1d9f5ca45d1731abb1"} Jan 07 03:44:27 crc kubenswrapper[4980]: I0107 03:44:27.923279 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" event={"ID":"b6737cd2-b163-4a8a-a674-54ba3a715f91","Type":"ContainerStarted","Data":"1f814c53a6ea351afe7a99ec9c148e017f280c438fb4cb0a94d8b39fbfd08992"} Jan 07 03:44:27 crc kubenswrapper[4980]: I0107 03:44:27.944282 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-jjfdp" podStartSLOduration=1.922809055 podStartE2EDuration="3.944259718s" podCreationTimestamp="2026-01-07 03:44:24 +0000 UTC" 
firstStartedPulling="2026-01-07 03:44:25.559332856 +0000 UTC m=+712.125027601" lastFinishedPulling="2026-01-07 03:44:27.580783489 +0000 UTC m=+714.146478264" observedRunningTime="2026-01-07 03:44:27.941493524 +0000 UTC m=+714.507188269" watchObservedRunningTime="2026-01-07 03:44:27.944259718 +0000 UTC m=+714.509954453" Jan 07 03:44:28 crc kubenswrapper[4980]: I0107 03:44:28.967443 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8"] Jan 07 03:44:28 crc kubenswrapper[4980]: I0107 03:44:28.970830 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" Jan 07 03:44:28 crc kubenswrapper[4980]: I0107 03:44:28.973763 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-m4v4q" Jan 07 03:44:28 crc kubenswrapper[4980]: I0107 03:44:28.986235 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-mspkv"] Jan 07 03:44:28 crc kubenswrapper[4980]: I0107 03:44:28.987303 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:28 crc kubenswrapper[4980]: I0107 03:44:28.993611 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 07 03:44:28 crc kubenswrapper[4980]: I0107 03:44:28.994771 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.019925 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fc77g"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.021126 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.027216 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-mspkv"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.064529 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rn6m\" (UniqueName: \"kubernetes.io/projected/550a6236-4f98-4b9a-ad9d-bce2a985a853-kube-api-access-4rn6m\") pod \"nmstate-metrics-7f7f7578db-gd4b8\" (UID: \"550a6236-4f98-4b9a-ad9d-bce2a985a853\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.132737 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.133644 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.135374 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vp5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.135654 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.136972 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.141816 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.165331 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-nmstate-lock\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.165379 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfsc5\" (UniqueName: \"kubernetes.io/projected/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-kube-api-access-nfsc5\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.165410 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-ovs-socket\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.165446 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63e6b18c-21c5-4d1d-85b9-0db97630b4b8-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-mspkv\" (UID: \"63e6b18c-21c5-4d1d-85b9-0db97630b4b8\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.165468 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-dbus-socket\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.165511 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gmnfb\" (UniqueName: \"kubernetes.io/projected/63e6b18c-21c5-4d1d-85b9-0db97630b4b8-kube-api-access-gmnfb\") pod \"nmstate-webhook-f8fb84555-mspkv\" (UID: \"63e6b18c-21c5-4d1d-85b9-0db97630b4b8\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.165543 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rn6m\" (UniqueName: \"kubernetes.io/projected/550a6236-4f98-4b9a-ad9d-bce2a985a853-kube-api-access-4rn6m\") pod \"nmstate-metrics-7f7f7578db-gd4b8\" (UID: \"550a6236-4f98-4b9a-ad9d-bce2a985a853\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.199884 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rn6m\" (UniqueName: \"kubernetes.io/projected/550a6236-4f98-4b9a-ad9d-bce2a985a853-kube-api-access-4rn6m\") pod \"nmstate-metrics-7f7f7578db-gd4b8\" (UID: \"550a6236-4f98-4b9a-ad9d-bce2a985a853\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.266568 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-nmstate-lock\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.266645 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-nmstate-lock\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.266652 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nfsc5\" (UniqueName: \"kubernetes.io/projected/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-kube-api-access-nfsc5\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.266778 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-ovs-socket\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.266806 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk2t6\" (UniqueName: \"kubernetes.io/projected/680a6c39-957f-43ff-82e4-c70f626c14c6-kube-api-access-vk2t6\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.266882 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/680a6c39-957f-43ff-82e4-c70f626c14c6-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.266890 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-ovs-socket\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.267018 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63e6b18c-21c5-4d1d-85b9-0db97630b4b8-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-mspkv\" (UID: \"63e6b18c-21c5-4d1d-85b9-0db97630b4b8\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.267097 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-dbus-socket\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.267191 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/680a6c39-957f-43ff-82e4-c70f626c14c6-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.267305 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnfb\" (UniqueName: \"kubernetes.io/projected/63e6b18c-21c5-4d1d-85b9-0db97630b4b8-kube-api-access-gmnfb\") pod \"nmstate-webhook-f8fb84555-mspkv\" (UID: \"63e6b18c-21c5-4d1d-85b9-0db97630b4b8\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.267436 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-dbus-socket\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.286964 4980 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.289242 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63e6b18c-21c5-4d1d-85b9-0db97630b4b8-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-mspkv\" (UID: \"63e6b18c-21c5-4d1d-85b9-0db97630b4b8\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.289823 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfsc5\" (UniqueName: \"kubernetes.io/projected/c5c3ce37-b71d-4353-b725-a82d5aeb2f81-kube-api-access-nfsc5\") pod \"nmstate-handler-fc77g\" (UID: \"c5c3ce37-b71d-4353-b725-a82d5aeb2f81\") " pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.299661 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnfb\" (UniqueName: \"kubernetes.io/projected/63e6b18c-21c5-4d1d-85b9-0db97630b4b8-kube-api-access-gmnfb\") pod \"nmstate-webhook-f8fb84555-mspkv\" (UID: \"63e6b18c-21c5-4d1d-85b9-0db97630b4b8\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.308456 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.319716 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5595cfd85-8tzhp"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.320526 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.341544 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.347598 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5595cfd85-8tzhp"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.368490 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/680a6c39-957f-43ff-82e4-c70f626c14c6-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.368584 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk2t6\" (UniqueName: \"kubernetes.io/projected/680a6c39-957f-43ff-82e4-c70f626c14c6-kube-api-access-vk2t6\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.368616 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/680a6c39-957f-43ff-82e4-c70f626c14c6-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.369465 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/680a6c39-957f-43ff-82e4-c70f626c14c6-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.379492 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/680a6c39-957f-43ff-82e4-c70f626c14c6-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.386465 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk2t6\" (UniqueName: \"kubernetes.io/projected/680a6c39-957f-43ff-82e4-c70f626c14c6-kube-api-access-vk2t6\") pod \"nmstate-console-plugin-6ff7998486-rc5d6\" (UID: \"680a6c39-957f-43ff-82e4-c70f626c14c6\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.458532 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.470166 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-oauth-serving-cert\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.470252 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8ae99226-05b4-41b1-9918-442adb01ff69-console-oauth-config\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.470420 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhf9\" 
(UniqueName: \"kubernetes.io/projected/8ae99226-05b4-41b1-9918-442adb01ff69-kube-api-access-xdhf9\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.470485 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-trusted-ca-bundle\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.470567 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae99226-05b4-41b1-9918-442adb01ff69-console-serving-cert\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.470705 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-service-ca\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.470822 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-console-config\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.533133 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8"] Jan 07 03:44:29 crc kubenswrapper[4980]: W0107 03:44:29.544277 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod550a6236_4f98_4b9a_ad9d_bce2a985a853.slice/crio-1b08134b72baa80b3afd1e0990879f13b8a21710213688fddd7f086f0c7b10e6 WatchSource:0}: Error finding container 1b08134b72baa80b3afd1e0990879f13b8a21710213688fddd7f086f0c7b10e6: Status 404 returned error can't find the container with id 1b08134b72baa80b3afd1e0990879f13b8a21710213688fddd7f086f0c7b10e6 Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.572087 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-console-config\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.572542 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-oauth-serving-cert\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.572632 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8ae99226-05b4-41b1-9918-442adb01ff69-console-oauth-config\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.572665 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhf9\" (UniqueName: 
\"kubernetes.io/projected/8ae99226-05b4-41b1-9918-442adb01ff69-kube-api-access-xdhf9\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.572690 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-trusted-ca-bundle\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.572717 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae99226-05b4-41b1-9918-442adb01ff69-console-serving-cert\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.572757 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-service-ca\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.574406 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-console-config\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.574426 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-oauth-serving-cert\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.575145 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-service-ca\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.575984 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ae99226-05b4-41b1-9918-442adb01ff69-trusted-ca-bundle\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.578216 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-mspkv"] Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.578700 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae99226-05b4-41b1-9918-442adb01ff69-console-serving-cert\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.581178 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8ae99226-05b4-41b1-9918-442adb01ff69-console-oauth-config\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: W0107 03:44:29.591733 4980 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63e6b18c_21c5_4d1d_85b9_0db97630b4b8.slice/crio-53edcc751714d0fa960dbf072f8d80fd1ce38988ebb317f3f7d6c936bd0f15a0 WatchSource:0}: Error finding container 53edcc751714d0fa960dbf072f8d80fd1ce38988ebb317f3f7d6c936bd0f15a0: Status 404 returned error can't find the container with id 53edcc751714d0fa960dbf072f8d80fd1ce38988ebb317f3f7d6c936bd0f15a0 Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.593904 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhf9\" (UniqueName: \"kubernetes.io/projected/8ae99226-05b4-41b1-9918-442adb01ff69-kube-api-access-xdhf9\") pod \"console-5595cfd85-8tzhp\" (UID: \"8ae99226-05b4-41b1-9918-442adb01ff69\") " pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.664895 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.684055 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6"] Jan 07 03:44:29 crc kubenswrapper[4980]: W0107 03:44:29.687795 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod680a6c39_957f_43ff_82e4_c70f626c14c6.slice/crio-0ee1d78061ad2dba7ee59f3ff5791410de8d260330c4f36ad77fa1b8df4b50c5 WatchSource:0}: Error finding container 0ee1d78061ad2dba7ee59f3ff5791410de8d260330c4f36ad77fa1b8df4b50c5: Status 404 returned error can't find the container with id 0ee1d78061ad2dba7ee59f3ff5791410de8d260330c4f36ad77fa1b8df4b50c5 Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.900237 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5595cfd85-8tzhp"] Jan 07 03:44:29 crc kubenswrapper[4980]: W0107 
03:44:29.907285 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ae99226_05b4_41b1_9918_442adb01ff69.slice/crio-aaadfbb29e539ee5493ca9f914ee30ed812e3398419947e9a077fa084c67b49e WatchSource:0}: Error finding container aaadfbb29e539ee5493ca9f914ee30ed812e3398419947e9a077fa084c67b49e: Status 404 returned error can't find the container with id aaadfbb29e539ee5493ca9f914ee30ed812e3398419947e9a077fa084c67b49e Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.938575 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5595cfd85-8tzhp" event={"ID":"8ae99226-05b4-41b1-9918-442adb01ff69","Type":"ContainerStarted","Data":"aaadfbb29e539ee5493ca9f914ee30ed812e3398419947e9a077fa084c67b49e"} Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.939735 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" event={"ID":"550a6236-4f98-4b9a-ad9d-bce2a985a853","Type":"ContainerStarted","Data":"1b08134b72baa80b3afd1e0990879f13b8a21710213688fddd7f086f0c7b10e6"} Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.941064 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fc77g" event={"ID":"c5c3ce37-b71d-4353-b725-a82d5aeb2f81","Type":"ContainerStarted","Data":"530f875e789a056aff172860bd4b786a8df39adae008c8421fc719a4bcd6fb91"} Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.943021 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" event={"ID":"63e6b18c-21c5-4d1d-85b9-0db97630b4b8","Type":"ContainerStarted","Data":"53edcc751714d0fa960dbf072f8d80fd1ce38988ebb317f3f7d6c936bd0f15a0"} Jan 07 03:44:29 crc kubenswrapper[4980]: I0107 03:44:29.963829 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" 
event={"ID":"680a6c39-957f-43ff-82e4-c70f626c14c6","Type":"ContainerStarted","Data":"0ee1d78061ad2dba7ee59f3ff5791410de8d260330c4f36ad77fa1b8df4b50c5"} Jan 07 03:44:30 crc kubenswrapper[4980]: I0107 03:44:30.972150 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5595cfd85-8tzhp" event={"ID":"8ae99226-05b4-41b1-9918-442adb01ff69","Type":"ContainerStarted","Data":"21b030abe7b306ac64119cd2f796ec74dac2fb2efc5a77aacd68ee47f3a25133"} Jan 07 03:44:30 crc kubenswrapper[4980]: I0107 03:44:30.991971 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5595cfd85-8tzhp" podStartSLOduration=1.991935612 podStartE2EDuration="1.991935612s" podCreationTimestamp="2026-01-07 03:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:44:30.988087612 +0000 UTC m=+717.553782347" watchObservedRunningTime="2026-01-07 03:44:30.991935612 +0000 UTC m=+717.557630347" Jan 07 03:44:32 crc kubenswrapper[4980]: I0107 03:44:32.985710 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" event={"ID":"550a6236-4f98-4b9a-ad9d-bce2a985a853","Type":"ContainerStarted","Data":"715af1ccdd3bcbaf643fec7346cb1764745b9ba657da90f54a18e07afe18241a"} Jan 07 03:44:32 crc kubenswrapper[4980]: I0107 03:44:32.987399 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fc77g" event={"ID":"c5c3ce37-b71d-4353-b725-a82d5aeb2f81","Type":"ContainerStarted","Data":"abefe8ce612825af0f19544578fce8bdb09a36c16468d2cea2ce3bb2ba9b6589"} Jan 07 03:44:32 crc kubenswrapper[4980]: I0107 03:44:32.987516 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:32 crc kubenswrapper[4980]: I0107 03:44:32.988597 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" event={"ID":"63e6b18c-21c5-4d1d-85b9-0db97630b4b8","Type":"ContainerStarted","Data":"1ad27753193156a2175e3da8925b9a70ecc54a5ad708ead16333321698c37b75"} Jan 07 03:44:32 crc kubenswrapper[4980]: I0107 03:44:32.988762 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:44:33 crc kubenswrapper[4980]: I0107 03:44:33.009196 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fc77g" podStartSLOduration=2.3436965 podStartE2EDuration="5.009170553s" podCreationTimestamp="2026-01-07 03:44:28 +0000 UTC" firstStartedPulling="2026-01-07 03:44:29.373670127 +0000 UTC m=+715.939364862" lastFinishedPulling="2026-01-07 03:44:32.03914417 +0000 UTC m=+718.604838915" observedRunningTime="2026-01-07 03:44:33.000995649 +0000 UTC m=+719.566690384" watchObservedRunningTime="2026-01-07 03:44:33.009170553 +0000 UTC m=+719.574865288" Jan 07 03:44:33 crc kubenswrapper[4980]: I0107 03:44:33.018194 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" podStartSLOduration=2.543415213 podStartE2EDuration="5.01817098s" podCreationTimestamp="2026-01-07 03:44:28 +0000 UTC" firstStartedPulling="2026-01-07 03:44:29.593990348 +0000 UTC m=+716.159685083" lastFinishedPulling="2026-01-07 03:44:32.068746105 +0000 UTC m=+718.634440850" observedRunningTime="2026-01-07 03:44:33.012415142 +0000 UTC m=+719.578109877" watchObservedRunningTime="2026-01-07 03:44:33.01817098 +0000 UTC m=+719.583865725" Jan 07 03:44:33 crc kubenswrapper[4980]: I0107 03:44:33.997671 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" event={"ID":"680a6c39-957f-43ff-82e4-c70f626c14c6","Type":"ContainerStarted","Data":"a4f42ea44df674097d4f79d06f3e622693c25adac05a68168b0c90cd17f1c1ca"} Jan 07 03:44:34 
crc kubenswrapper[4980]: I0107 03:44:34.023560 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rc5d6" podStartSLOduration=1.386717628 podStartE2EDuration="5.023535055s" podCreationTimestamp="2026-01-07 03:44:29 +0000 UTC" firstStartedPulling="2026-01-07 03:44:29.692466223 +0000 UTC m=+716.258160968" lastFinishedPulling="2026-01-07 03:44:33.32928363 +0000 UTC m=+719.894978395" observedRunningTime="2026-01-07 03:44:34.021794622 +0000 UTC m=+720.587489387" watchObservedRunningTime="2026-01-07 03:44:34.023535055 +0000 UTC m=+720.589229790" Jan 07 03:44:36 crc kubenswrapper[4980]: I0107 03:44:36.012366 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" event={"ID":"550a6236-4f98-4b9a-ad9d-bce2a985a853","Type":"ContainerStarted","Data":"90572fd07de9e1ec561caf6a38dc89191d8d5fbf5cfe968c38a7c7067b4eac69"} Jan 07 03:44:36 crc kubenswrapper[4980]: I0107 03:44:36.041300 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-gd4b8" podStartSLOduration=2.5171351619999998 podStartE2EDuration="8.041271872s" podCreationTimestamp="2026-01-07 03:44:28 +0000 UTC" firstStartedPulling="2026-01-07 03:44:29.546841791 +0000 UTC m=+716.112536526" lastFinishedPulling="2026-01-07 03:44:35.070978501 +0000 UTC m=+721.636673236" observedRunningTime="2026-01-07 03:44:36.038009382 +0000 UTC m=+722.603704147" watchObservedRunningTime="2026-01-07 03:44:36.041271872 +0000 UTC m=+722.606966637" Jan 07 03:44:36 crc kubenswrapper[4980]: I0107 03:44:36.543590 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:44:36 crc kubenswrapper[4980]: I0107 
03:44:36.543722 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:44:39 crc kubenswrapper[4980]: I0107 03:44:39.665128 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:39 crc kubenswrapper[4980]: I0107 03:44:39.672511 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:39 crc kubenswrapper[4980]: I0107 03:44:39.677935 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:39 crc kubenswrapper[4980]: I0107 03:44:39.701564 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fc77g" Jan 07 03:44:40 crc kubenswrapper[4980]: I0107 03:44:40.049045 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5595cfd85-8tzhp" Jan 07 03:44:40 crc kubenswrapper[4980]: I0107 03:44:40.120908 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-46blp"] Jan 07 03:44:49 crc kubenswrapper[4980]: I0107 03:44:49.317132 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mspkv" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.182752 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw"] Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.185856 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.189942 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.190416 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.206237 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw"] Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.350831 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881208a4-6e38-4c65-8767-6c4d096c565a-secret-volume\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.350978 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881208a4-6e38-4c65-8767-6c4d096c565a-config-volume\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.351071 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9tvg\" (UniqueName: \"kubernetes.io/projected/881208a4-6e38-4c65-8767-6c4d096c565a-kube-api-access-f9tvg\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.452353 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881208a4-6e38-4c65-8767-6c4d096c565a-config-volume\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.452484 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9tvg\" (UniqueName: \"kubernetes.io/projected/881208a4-6e38-4c65-8767-6c4d096c565a-kube-api-access-f9tvg\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.452543 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881208a4-6e38-4c65-8767-6c4d096c565a-secret-volume\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.454744 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881208a4-6e38-4c65-8767-6c4d096c565a-config-volume\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.467024 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/881208a4-6e38-4c65-8767-6c4d096c565a-secret-volume\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.481977 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9tvg\" (UniqueName: \"kubernetes.io/projected/881208a4-6e38-4c65-8767-6c4d096c565a-kube-api-access-f9tvg\") pod \"collect-profiles-29462625-4xdzw\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:00 crc kubenswrapper[4980]: I0107 03:45:00.530237 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:01 crc kubenswrapper[4980]: I0107 03:45:01.021543 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw"] Jan 07 03:45:01 crc kubenswrapper[4980]: I0107 03:45:01.240482 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" event={"ID":"881208a4-6e38-4c65-8767-6c4d096c565a","Type":"ContainerStarted","Data":"06dff5a19673f7a6b3acfd8eeb119cbc41484384f8509c33ac8625d40b26251d"} Jan 07 03:45:01 crc kubenswrapper[4980]: I0107 03:45:01.241052 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" event={"ID":"881208a4-6e38-4c65-8767-6c4d096c565a","Type":"ContainerStarted","Data":"66c47f16372bf2c04d5cd4a68160fc5afeea1403eb401a9c5fd05891fa34105d"} Jan 07 03:45:01 crc kubenswrapper[4980]: I0107 03:45:01.267416 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" 
podStartSLOduration=1.267386522 podStartE2EDuration="1.267386522s" podCreationTimestamp="2026-01-07 03:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:45:01.263296535 +0000 UTC m=+747.828991280" watchObservedRunningTime="2026-01-07 03:45:01.267386522 +0000 UTC m=+747.833081297" Jan 07 03:45:02 crc kubenswrapper[4980]: I0107 03:45:02.248879 4980 generic.go:334] "Generic (PLEG): container finished" podID="881208a4-6e38-4c65-8767-6c4d096c565a" containerID="06dff5a19673f7a6b3acfd8eeb119cbc41484384f8509c33ac8625d40b26251d" exitCode=0 Jan 07 03:45:02 crc kubenswrapper[4980]: I0107 03:45:02.248931 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" event={"ID":"881208a4-6e38-4c65-8767-6c4d096c565a","Type":"ContainerDied","Data":"06dff5a19673f7a6b3acfd8eeb119cbc41484384f8509c33ac8625d40b26251d"} Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.571658 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.710862 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881208a4-6e38-4c65-8767-6c4d096c565a-secret-volume\") pod \"881208a4-6e38-4c65-8767-6c4d096c565a\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.710906 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9tvg\" (UniqueName: \"kubernetes.io/projected/881208a4-6e38-4c65-8767-6c4d096c565a-kube-api-access-f9tvg\") pod \"881208a4-6e38-4c65-8767-6c4d096c565a\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.711004 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881208a4-6e38-4c65-8767-6c4d096c565a-config-volume\") pod \"881208a4-6e38-4c65-8767-6c4d096c565a\" (UID: \"881208a4-6e38-4c65-8767-6c4d096c565a\") " Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.713431 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881208a4-6e38-4c65-8767-6c4d096c565a-config-volume" (OuterVolumeSpecName: "config-volume") pod "881208a4-6e38-4c65-8767-6c4d096c565a" (UID: "881208a4-6e38-4c65-8767-6c4d096c565a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.719356 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881208a4-6e38-4c65-8767-6c4d096c565a-kube-api-access-f9tvg" (OuterVolumeSpecName: "kube-api-access-f9tvg") pod "881208a4-6e38-4c65-8767-6c4d096c565a" (UID: "881208a4-6e38-4c65-8767-6c4d096c565a"). 
InnerVolumeSpecName "kube-api-access-f9tvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.719386 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881208a4-6e38-4c65-8767-6c4d096c565a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "881208a4-6e38-4c65-8767-6c4d096c565a" (UID: "881208a4-6e38-4c65-8767-6c4d096c565a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.814650 4980 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881208a4-6e38-4c65-8767-6c4d096c565a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.814692 4980 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/881208a4-6e38-4c65-8767-6c4d096c565a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:03 crc kubenswrapper[4980]: I0107 03:45:03.814711 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9tvg\" (UniqueName: \"kubernetes.io/projected/881208a4-6e38-4c65-8767-6c4d096c565a-kube-api-access-f9tvg\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:04 crc kubenswrapper[4980]: I0107 03:45:04.274148 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" event={"ID":"881208a4-6e38-4c65-8767-6c4d096c565a","Type":"ContainerDied","Data":"66c47f16372bf2c04d5cd4a68160fc5afeea1403eb401a9c5fd05891fa34105d"} Jan 07 03:45:04 crc kubenswrapper[4980]: I0107 03:45:04.274607 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c47f16372bf2c04d5cd4a68160fc5afeea1403eb401a9c5fd05891fa34105d" Jan 07 03:45:04 crc kubenswrapper[4980]: I0107 03:45:04.274185 4980 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.173426 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-46blp" podUID="063cfd7b-7d93-45bc-a374-99b5e204b200" containerName="console" containerID="cri-o://bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530" gracePeriod=15 Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.660094 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-46blp_063cfd7b-7d93-45bc-a374-99b5e204b200/console/0.log" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.660479 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.748567 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsbft\" (UniqueName: \"kubernetes.io/projected/063cfd7b-7d93-45bc-a374-99b5e204b200-kube-api-access-xsbft\") pod \"063cfd7b-7d93-45bc-a374-99b5e204b200\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.748608 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-oauth-config\") pod \"063cfd7b-7d93-45bc-a374-99b5e204b200\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.748634 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-oauth-serving-cert\") pod \"063cfd7b-7d93-45bc-a374-99b5e204b200\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " Jan 
07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.748654 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-console-config\") pod \"063cfd7b-7d93-45bc-a374-99b5e204b200\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.748674 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-serving-cert\") pod \"063cfd7b-7d93-45bc-a374-99b5e204b200\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.748705 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-trusted-ca-bundle\") pod \"063cfd7b-7d93-45bc-a374-99b5e204b200\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.748742 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-service-ca\") pod \"063cfd7b-7d93-45bc-a374-99b5e204b200\" (UID: \"063cfd7b-7d93-45bc-a374-99b5e204b200\") " Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.751801 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-console-config" (OuterVolumeSpecName: "console-config") pod "063cfd7b-7d93-45bc-a374-99b5e204b200" (UID: "063cfd7b-7d93-45bc-a374-99b5e204b200"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.751865 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "063cfd7b-7d93-45bc-a374-99b5e204b200" (UID: "063cfd7b-7d93-45bc-a374-99b5e204b200"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.751881 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-service-ca" (OuterVolumeSpecName: "service-ca") pod "063cfd7b-7d93-45bc-a374-99b5e204b200" (UID: "063cfd7b-7d93-45bc-a374-99b5e204b200"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.751909 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "063cfd7b-7d93-45bc-a374-99b5e204b200" (UID: "063cfd7b-7d93-45bc-a374-99b5e204b200"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.756460 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "063cfd7b-7d93-45bc-a374-99b5e204b200" (UID: "063cfd7b-7d93-45bc-a374-99b5e204b200"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.756469 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063cfd7b-7d93-45bc-a374-99b5e204b200-kube-api-access-xsbft" (OuterVolumeSpecName: "kube-api-access-xsbft") pod "063cfd7b-7d93-45bc-a374-99b5e204b200" (UID: "063cfd7b-7d93-45bc-a374-99b5e204b200"). InnerVolumeSpecName "kube-api-access-xsbft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.756966 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "063cfd7b-7d93-45bc-a374-99b5e204b200" (UID: "063cfd7b-7d93-45bc-a374-99b5e204b200"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.849920 4980 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.849957 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsbft\" (UniqueName: \"kubernetes.io/projected/063cfd7b-7d93-45bc-a374-99b5e204b200-kube-api-access-xsbft\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.849971 4980 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.849982 4980 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.849996 4980 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-console-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.850660 4980 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/063cfd7b-7d93-45bc-a374-99b5e204b200-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:05 crc kubenswrapper[4980]: I0107 03:45:05.850684 4980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063cfd7b-7d93-45bc-a374-99b5e204b200-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.022929 4980 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.303974 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-46blp_063cfd7b-7d93-45bc-a374-99b5e204b200/console/0.log" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.304051 4980 generic.go:334] "Generic (PLEG): container finished" podID="063cfd7b-7d93-45bc-a374-99b5e204b200" containerID="bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530" exitCode=2 Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.304097 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-46blp" event={"ID":"063cfd7b-7d93-45bc-a374-99b5e204b200","Type":"ContainerDied","Data":"bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530"} Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.304133 4980 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-46blp" event={"ID":"063cfd7b-7d93-45bc-a374-99b5e204b200","Type":"ContainerDied","Data":"1618668fa804b7685f517c88d2b7f869a363a72299bb206f8e8958310471264a"} Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.304169 4980 scope.go:117] "RemoveContainer" containerID="bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.304415 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-46blp" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.339550 4980 scope.go:117] "RemoveContainer" containerID="bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530" Jan 07 03:45:06 crc kubenswrapper[4980]: E0107 03:45:06.340367 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530\": container with ID starting with bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530 not found: ID does not exist" containerID="bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.340459 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530"} err="failed to get container status \"bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530\": rpc error: code = NotFound desc = could not find container \"bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530\": container with ID starting with bb4bdec15f5c99f4479399e58d76d103711321dcddae96dceb57b2971510e530 not found: ID does not exist" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.348279 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-46blp"] Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.352973 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-46blp"] Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.543819 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.543909 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.543976 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.544981 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22f87e8413daf7843826baa261082b343285cfe845501a26b14ff6b1f2751cb0"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.545093 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://22f87e8413daf7843826baa261082b343285cfe845501a26b14ff6b1f2751cb0" gracePeriod=600 
Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.876009 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4d2m6"] Jan 07 03:45:06 crc kubenswrapper[4980]: E0107 03:45:06.877011 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881208a4-6e38-4c65-8767-6c4d096c565a" containerName="collect-profiles" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.877043 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="881208a4-6e38-4c65-8767-6c4d096c565a" containerName="collect-profiles" Jan 07 03:45:06 crc kubenswrapper[4980]: E0107 03:45:06.877071 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063cfd7b-7d93-45bc-a374-99b5e204b200" containerName="console" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.877085 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="063cfd7b-7d93-45bc-a374-99b5e204b200" containerName="console" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.877273 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="881208a4-6e38-4c65-8767-6c4d096c565a" containerName="collect-profiles" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.877307 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="063cfd7b-7d93-45bc-a374-99b5e204b200" containerName="console" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.878376 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.899486 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d2m6"] Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.968742 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9gh\" (UniqueName: \"kubernetes.io/projected/abba6549-6787-4424-ac7f-f933e80765a7-kube-api-access-kc9gh\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.968859 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-utilities\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:06 crc kubenswrapper[4980]: I0107 03:45:06.968913 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-catalog-content\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.070074 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc9gh\" (UniqueName: \"kubernetes.io/projected/abba6549-6787-4424-ac7f-f933e80765a7-kube-api-access-kc9gh\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.070208 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-utilities\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.070255 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-catalog-content\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.071222 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-catalog-content\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.071269 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-utilities\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.103251 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc9gh\" (UniqueName: \"kubernetes.io/projected/abba6549-6787-4424-ac7f-f933e80765a7-kube-api-access-kc9gh\") pod \"redhat-marketplace-4d2m6\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.206270 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.332381 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="22f87e8413daf7843826baa261082b343285cfe845501a26b14ff6b1f2751cb0" exitCode=0 Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.332437 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"22f87e8413daf7843826baa261082b343285cfe845501a26b14ff6b1f2751cb0"} Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.332463 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"2aeadb84272b7976dfcd584a184be5b65ae16f36aa28ca68277c09134c73d7e7"} Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.332480 4980 scope.go:117] "RemoveContainer" containerID="b42f21736ffa02dccd45145f74074847a6300123f32573dcab50b9332e94e700" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.442442 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d2m6"] Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.503541 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l"] Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.505190 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.508650 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.524791 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l"] Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.575938 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sqjl\" (UniqueName: \"kubernetes.io/projected/8424289d-d257-486f-a82a-8d8cec374808-kube-api-access-6sqjl\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.576014 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.576066 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: 
I0107 03:45:07.677184 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sqjl\" (UniqueName: \"kubernetes.io/projected/8424289d-d257-486f-a82a-8d8cec374808-kube-api-access-6sqjl\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.677265 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.677330 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.677933 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.677969 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.698120 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sqjl\" (UniqueName: \"kubernetes.io/projected/8424289d-d257-486f-a82a-8d8cec374808-kube-api-access-6sqjl\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.743509 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063cfd7b-7d93-45bc-a374-99b5e204b200" path="/var/lib/kubelet/pods/063cfd7b-7d93-45bc-a374-99b5e204b200/volumes" Jan 07 03:45:07 crc kubenswrapper[4980]: I0107 03:45:07.824262 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:08 crc kubenswrapper[4980]: I0107 03:45:08.074473 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l"] Jan 07 03:45:08 crc kubenswrapper[4980]: I0107 03:45:08.351096 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" event={"ID":"8424289d-d257-486f-a82a-8d8cec374808","Type":"ContainerStarted","Data":"dfe3928bf26644b529ad64082f88e23e75f84a11a1a6934244c7ae0712c92ea8"} Jan 07 03:45:08 crc kubenswrapper[4980]: I0107 03:45:08.351164 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" event={"ID":"8424289d-d257-486f-a82a-8d8cec374808","Type":"ContainerStarted","Data":"59bdfd5878992b1505af8c4d2eaa45e07da7fc4c1391f35eafe4e32a68f1149a"} Jan 07 03:45:08 crc kubenswrapper[4980]: I0107 03:45:08.355669 4980 generic.go:334] "Generic (PLEG): container finished" podID="abba6549-6787-4424-ac7f-f933e80765a7" containerID="81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50" exitCode=0 Jan 07 03:45:08 crc kubenswrapper[4980]: I0107 03:45:08.355717 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d2m6" event={"ID":"abba6549-6787-4424-ac7f-f933e80765a7","Type":"ContainerDied","Data":"81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50"} Jan 07 03:45:08 crc kubenswrapper[4980]: I0107 03:45:08.355746 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d2m6" event={"ID":"abba6549-6787-4424-ac7f-f933e80765a7","Type":"ContainerStarted","Data":"a37504a1a6652f447d4c26680bf61afe5d81df8c5634be0ec84133d829ba47f3"} Jan 07 03:45:09 crc kubenswrapper[4980]: I0107 
03:45:09.366832 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d2m6" event={"ID":"abba6549-6787-4424-ac7f-f933e80765a7","Type":"ContainerStarted","Data":"a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64"} Jan 07 03:45:09 crc kubenswrapper[4980]: I0107 03:45:09.369750 4980 generic.go:334] "Generic (PLEG): container finished" podID="8424289d-d257-486f-a82a-8d8cec374808" containerID="dfe3928bf26644b529ad64082f88e23e75f84a11a1a6934244c7ae0712c92ea8" exitCode=0 Jan 07 03:45:09 crc kubenswrapper[4980]: I0107 03:45:09.369827 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" event={"ID":"8424289d-d257-486f-a82a-8d8cec374808","Type":"ContainerDied","Data":"dfe3928bf26644b529ad64082f88e23e75f84a11a1a6934244c7ae0712c92ea8"} Jan 07 03:45:10 crc kubenswrapper[4980]: I0107 03:45:10.381150 4980 generic.go:334] "Generic (PLEG): container finished" podID="abba6549-6787-4424-ac7f-f933e80765a7" containerID="a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64" exitCode=0 Jan 07 03:45:10 crc kubenswrapper[4980]: I0107 03:45:10.381220 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d2m6" event={"ID":"abba6549-6787-4424-ac7f-f933e80765a7","Type":"ContainerDied","Data":"a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64"} Jan 07 03:45:11 crc kubenswrapper[4980]: I0107 03:45:11.392614 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d2m6" event={"ID":"abba6549-6787-4424-ac7f-f933e80765a7","Type":"ContainerStarted","Data":"2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622"} Jan 07 03:45:11 crc kubenswrapper[4980]: I0107 03:45:11.397056 4980 generic.go:334] "Generic (PLEG): container finished" podID="8424289d-d257-486f-a82a-8d8cec374808" 
containerID="af00664a8ac4f44579a50f470c8ae8839dcb560421c418cc2d446eae215a6ac7" exitCode=0 Jan 07 03:45:11 crc kubenswrapper[4980]: I0107 03:45:11.397113 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" event={"ID":"8424289d-d257-486f-a82a-8d8cec374808","Type":"ContainerDied","Data":"af00664a8ac4f44579a50f470c8ae8839dcb560421c418cc2d446eae215a6ac7"} Jan 07 03:45:11 crc kubenswrapper[4980]: I0107 03:45:11.422918 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4d2m6" podStartSLOduration=2.962411234 podStartE2EDuration="5.4228905s" podCreationTimestamp="2026-01-07 03:45:06 +0000 UTC" firstStartedPulling="2026-01-07 03:45:08.358959906 +0000 UTC m=+754.924654681" lastFinishedPulling="2026-01-07 03:45:10.819439182 +0000 UTC m=+757.385133947" observedRunningTime="2026-01-07 03:45:11.420218237 +0000 UTC m=+757.985913002" watchObservedRunningTime="2026-01-07 03:45:11.4228905 +0000 UTC m=+757.988585245" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.272702 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nsf7b"] Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.275182 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.293311 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nsf7b"] Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.346167 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkrf\" (UniqueName: \"kubernetes.io/projected/4bfc8152-71b0-4152-9a86-6acb6ed91e04-kube-api-access-rrkrf\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.346430 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-utilities\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.346525 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-catalog-content\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.410448 4980 generic.go:334] "Generic (PLEG): container finished" podID="8424289d-d257-486f-a82a-8d8cec374808" containerID="7fdf6a4e28ccb39059c527e8430a319230b497053a7dfef0866de40831067a45" exitCode=0 Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.410515 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" 
event={"ID":"8424289d-d257-486f-a82a-8d8cec374808","Type":"ContainerDied","Data":"7fdf6a4e28ccb39059c527e8430a319230b497053a7dfef0866de40831067a45"} Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.447458 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrkrf\" (UniqueName: \"kubernetes.io/projected/4bfc8152-71b0-4152-9a86-6acb6ed91e04-kube-api-access-rrkrf\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.447535 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-utilities\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.447626 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-catalog-content\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.448079 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-catalog-content\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.448674 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-utilities\") pod \"redhat-operators-nsf7b\" (UID: 
\"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.476872 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrkrf\" (UniqueName: \"kubernetes.io/projected/4bfc8152-71b0-4152-9a86-6acb6ed91e04-kube-api-access-rrkrf\") pod \"redhat-operators-nsf7b\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:12 crc kubenswrapper[4980]: I0107 03:45:12.594131 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.060039 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nsf7b"] Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.418252 4980 generic.go:334] "Generic (PLEG): container finished" podID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerID="865cfc093a1536186c2d9b6c6c3b6f33c233c7cc87782fe81f26fbd7c8ff191f" exitCode=0 Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.418362 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nsf7b" event={"ID":"4bfc8152-71b0-4152-9a86-6acb6ed91e04","Type":"ContainerDied","Data":"865cfc093a1536186c2d9b6c6c3b6f33c233c7cc87782fe81f26fbd7c8ff191f"} Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.418784 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nsf7b" event={"ID":"4bfc8152-71b0-4152-9a86-6acb6ed91e04","Type":"ContainerStarted","Data":"5a03c82f57571db4a2d9d289c34c3e15723ecca23fd2f4a074e95c2cf4291a0c"} Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.711843 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.773725 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-util\") pod \"8424289d-d257-486f-a82a-8d8cec374808\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.773818 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sqjl\" (UniqueName: \"kubernetes.io/projected/8424289d-d257-486f-a82a-8d8cec374808-kube-api-access-6sqjl\") pod \"8424289d-d257-486f-a82a-8d8cec374808\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.773886 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-bundle\") pod \"8424289d-d257-486f-a82a-8d8cec374808\" (UID: \"8424289d-d257-486f-a82a-8d8cec374808\") " Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.775011 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-bundle" (OuterVolumeSpecName: "bundle") pod "8424289d-d257-486f-a82a-8d8cec374808" (UID: "8424289d-d257-486f-a82a-8d8cec374808"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.794124 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8424289d-d257-486f-a82a-8d8cec374808-kube-api-access-6sqjl" (OuterVolumeSpecName: "kube-api-access-6sqjl") pod "8424289d-d257-486f-a82a-8d8cec374808" (UID: "8424289d-d257-486f-a82a-8d8cec374808"). InnerVolumeSpecName "kube-api-access-6sqjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.875745 4980 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.875801 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sqjl\" (UniqueName: \"kubernetes.io/projected/8424289d-d257-486f-a82a-8d8cec374808-kube-api-access-6sqjl\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.957001 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-util" (OuterVolumeSpecName: "util") pod "8424289d-d257-486f-a82a-8d8cec374808" (UID: "8424289d-d257-486f-a82a-8d8cec374808"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:45:13 crc kubenswrapper[4980]: I0107 03:45:13.977292 4980 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8424289d-d257-486f-a82a-8d8cec374808-util\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:14 crc kubenswrapper[4980]: I0107 03:45:14.429417 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" event={"ID":"8424289d-d257-486f-a82a-8d8cec374808","Type":"ContainerDied","Data":"59bdfd5878992b1505af8c4d2eaa45e07da7fc4c1391f35eafe4e32a68f1149a"} Jan 07 03:45:14 crc kubenswrapper[4980]: I0107 03:45:14.429481 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59bdfd5878992b1505af8c4d2eaa45e07da7fc4c1391f35eafe4e32a68f1149a" Jan 07 03:45:14 crc kubenswrapper[4980]: I0107 03:45:14.429546 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l" Jan 07 03:45:15 crc kubenswrapper[4980]: I0107 03:45:15.440874 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nsf7b" event={"ID":"4bfc8152-71b0-4152-9a86-6acb6ed91e04","Type":"ContainerStarted","Data":"6786af3613ea827b358171e78b942542effd485df5c016314859569576237712"} Jan 07 03:45:16 crc kubenswrapper[4980]: I0107 03:45:16.458097 4980 generic.go:334] "Generic (PLEG): container finished" podID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerID="6786af3613ea827b358171e78b942542effd485df5c016314859569576237712" exitCode=0 Jan 07 03:45:16 crc kubenswrapper[4980]: I0107 03:45:16.458357 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nsf7b" event={"ID":"4bfc8152-71b0-4152-9a86-6acb6ed91e04","Type":"ContainerDied","Data":"6786af3613ea827b358171e78b942542effd485df5c016314859569576237712"} Jan 07 03:45:17 crc kubenswrapper[4980]: I0107 03:45:17.206668 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:17 crc kubenswrapper[4980]: I0107 03:45:17.207459 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:17 crc kubenswrapper[4980]: I0107 03:45:17.264087 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:17 crc kubenswrapper[4980]: I0107 03:45:17.471444 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nsf7b" event={"ID":"4bfc8152-71b0-4152-9a86-6acb6ed91e04","Type":"ContainerStarted","Data":"bb2b5d03bb3d46ff1958500d2ec63d02b1ceebfede403ae3d13f58dd29b9d4fe"} Jan 07 03:45:17 crc kubenswrapper[4980]: I0107 03:45:17.499672 4980 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nsf7b" podStartSLOduration=1.998315801 podStartE2EDuration="5.499646058s" podCreationTimestamp="2026-01-07 03:45:12 +0000 UTC" firstStartedPulling="2026-01-07 03:45:13.420703231 +0000 UTC m=+759.986398006" lastFinishedPulling="2026-01-07 03:45:16.922033488 +0000 UTC m=+763.487728263" observedRunningTime="2026-01-07 03:45:17.497629225 +0000 UTC m=+764.063324000" watchObservedRunningTime="2026-01-07 03:45:17.499646058 +0000 UTC m=+764.065340793" Jan 07 03:45:17 crc kubenswrapper[4980]: I0107 03:45:17.532599 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.453962 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d2m6"] Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.480809 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4d2m6" podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="registry-server" containerID="cri-o://2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622" gracePeriod=2 Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.885030 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.968168 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-catalog-content\") pod \"abba6549-6787-4424-ac7f-f933e80765a7\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.968249 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc9gh\" (UniqueName: \"kubernetes.io/projected/abba6549-6787-4424-ac7f-f933e80765a7-kube-api-access-kc9gh\") pod \"abba6549-6787-4424-ac7f-f933e80765a7\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.968296 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-utilities\") pod \"abba6549-6787-4424-ac7f-f933e80765a7\" (UID: \"abba6549-6787-4424-ac7f-f933e80765a7\") " Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.969256 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-utilities" (OuterVolumeSpecName: "utilities") pod "abba6549-6787-4424-ac7f-f933e80765a7" (UID: "abba6549-6787-4424-ac7f-f933e80765a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:45:19 crc kubenswrapper[4980]: I0107 03:45:19.974100 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abba6549-6787-4424-ac7f-f933e80765a7-kube-api-access-kc9gh" (OuterVolumeSpecName: "kube-api-access-kc9gh") pod "abba6549-6787-4424-ac7f-f933e80765a7" (UID: "abba6549-6787-4424-ac7f-f933e80765a7"). InnerVolumeSpecName "kube-api-access-kc9gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.069851 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc9gh\" (UniqueName: \"kubernetes.io/projected/abba6549-6787-4424-ac7f-f933e80765a7-kube-api-access-kc9gh\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.069890 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.489863 4980 generic.go:334] "Generic (PLEG): container finished" podID="abba6549-6787-4424-ac7f-f933e80765a7" containerID="2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622" exitCode=0 Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.489932 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d2m6" event={"ID":"abba6549-6787-4424-ac7f-f933e80765a7","Type":"ContainerDied","Data":"2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622"} Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.489986 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d2m6" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.490029 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d2m6" event={"ID":"abba6549-6787-4424-ac7f-f933e80765a7","Type":"ContainerDied","Data":"a37504a1a6652f447d4c26680bf61afe5d81df8c5634be0ec84133d829ba47f3"} Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.490062 4980 scope.go:117] "RemoveContainer" containerID="2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.518345 4980 scope.go:117] "RemoveContainer" containerID="a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.540236 4980 scope.go:117] "RemoveContainer" containerID="81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.566759 4980 scope.go:117] "RemoveContainer" containerID="2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622" Jan 07 03:45:20 crc kubenswrapper[4980]: E0107 03:45:20.567510 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622\": container with ID starting with 2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622 not found: ID does not exist" containerID="2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.567609 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622"} err="failed to get container status \"2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622\": rpc error: code = NotFound desc = could not find container 
\"2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622\": container with ID starting with 2c583d818d721a0adef76e66d99a016ee543dcff1e7998d7faaef23a8a131622 not found: ID does not exist" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.567665 4980 scope.go:117] "RemoveContainer" containerID="a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64" Jan 07 03:45:20 crc kubenswrapper[4980]: E0107 03:45:20.568144 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64\": container with ID starting with a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64 not found: ID does not exist" containerID="a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.568208 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64"} err="failed to get container status \"a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64\": rpc error: code = NotFound desc = could not find container \"a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64\": container with ID starting with a566f552f818500531199ebb14e85475d42e8fc3d7f66195c101ef693ceafc64 not found: ID does not exist" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.568261 4980 scope.go:117] "RemoveContainer" containerID="81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50" Jan 07 03:45:20 crc kubenswrapper[4980]: E0107 03:45:20.568938 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50\": container with ID starting with 81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50 not found: ID does not exist" 
containerID="81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.568981 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50"} err="failed to get container status \"81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50\": rpc error: code = NotFound desc = could not find container \"81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50\": container with ID starting with 81d31a9eaa8699c4febd39fe58a56ba50c458fc69f57c32b591b01b4a76cdb50 not found: ID does not exist" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.827877 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abba6549-6787-4424-ac7f-f933e80765a7" (UID: "abba6549-6787-4424-ac7f-f933e80765a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:45:20 crc kubenswrapper[4980]: I0107 03:45:20.881635 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abba6549-6787-4424-ac7f-f933e80765a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:21 crc kubenswrapper[4980]: I0107 03:45:21.137838 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d2m6"] Jan 07 03:45:21 crc kubenswrapper[4980]: I0107 03:45:21.144104 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d2m6"] Jan 07 03:45:21 crc kubenswrapper[4980]: I0107 03:45:21.744856 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abba6549-6787-4424-ac7f-f933e80765a7" path="/var/lib/kubelet/pods/abba6549-6787-4424-ac7f-f933e80765a7/volumes" Jan 07 03:45:22 crc kubenswrapper[4980]: I0107 03:45:22.595470 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:22 crc kubenswrapper[4980]: I0107 03:45:22.597400 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:23 crc kubenswrapper[4980]: I0107 03:45:23.639838 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nsf7b" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="registry-server" probeResult="failure" output=< Jan 07 03:45:23 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:45:23 crc kubenswrapper[4980]: > Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.924143 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-65b595589f-ljh57"] Jan 07 03:45:25 crc kubenswrapper[4980]: E0107 03:45:25.924733 4980 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8424289d-d257-486f-a82a-8d8cec374808" containerName="pull" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.924749 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8424289d-d257-486f-a82a-8d8cec374808" containerName="pull" Jan 07 03:45:25 crc kubenswrapper[4980]: E0107 03:45:25.924766 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="extract-utilities" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.924775 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="extract-utilities" Jan 07 03:45:25 crc kubenswrapper[4980]: E0107 03:45:25.924787 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="registry-server" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.924796 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="registry-server" Jan 07 03:45:25 crc kubenswrapper[4980]: E0107 03:45:25.924810 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8424289d-d257-486f-a82a-8d8cec374808" containerName="util" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.924820 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8424289d-d257-486f-a82a-8d8cec374808" containerName="util" Jan 07 03:45:25 crc kubenswrapper[4980]: E0107 03:45:25.924830 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8424289d-d257-486f-a82a-8d8cec374808" containerName="extract" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.924838 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8424289d-d257-486f-a82a-8d8cec374808" containerName="extract" Jan 07 03:45:25 crc kubenswrapper[4980]: E0107 03:45:25.924855 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="extract-content" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.924865 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="extract-content" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.925002 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8424289d-d257-486f-a82a-8d8cec374808" containerName="extract" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.925023 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="abba6549-6787-4424-ac7f-f933e80765a7" containerName="registry-server" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.925881 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.927967 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-rx6dn" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.932528 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.932659 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.932528 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.936362 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 07 03:45:25 crc kubenswrapper[4980]: I0107 03:45:25.945121 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65b595589f-ljh57"] Jan 07 
03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.055212 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vqtx\" (UniqueName: \"kubernetes.io/projected/a2680c24-a9d4-4daa-9ed0-3bc391695662-kube-api-access-5vqtx\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.055310 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2680c24-a9d4-4daa-9ed0-3bc391695662-webhook-cert\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.055334 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2680c24-a9d4-4daa-9ed0-3bc391695662-apiservice-cert\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.156962 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vqtx\" (UniqueName: \"kubernetes.io/projected/a2680c24-a9d4-4daa-9ed0-3bc391695662-kube-api-access-5vqtx\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.157445 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/a2680c24-a9d4-4daa-9ed0-3bc391695662-webhook-cert\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.157469 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2680c24-a9d4-4daa-9ed0-3bc391695662-apiservice-cert\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.168146 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2680c24-a9d4-4daa-9ed0-3bc391695662-webhook-cert\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.180298 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vqtx\" (UniqueName: \"kubernetes.io/projected/a2680c24-a9d4-4daa-9ed0-3bc391695662-kube-api-access-5vqtx\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.182107 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2680c24-a9d4-4daa-9ed0-3bc391695662-apiservice-cert\") pod \"metallb-operator-controller-manager-65b595589f-ljh57\" (UID: \"a2680c24-a9d4-4daa-9ed0-3bc391695662\") " 
pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.253732 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.389109 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7"] Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.392982 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.394842 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7"] Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.398712 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.398992 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-56gqz" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.399146 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.463317 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-apiservice-cert\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.463394 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-webhook-cert\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.463444 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2m7t\" (UniqueName: \"kubernetes.io/projected/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-kube-api-access-q2m7t\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.564056 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-apiservice-cert\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.564131 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-webhook-cert\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.564176 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2m7t\" (UniqueName: \"kubernetes.io/projected/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-kube-api-access-q2m7t\") pod 
\"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.572662 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-webhook-cert\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.574565 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-apiservice-cert\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.595495 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2m7t\" (UniqueName: \"kubernetes.io/projected/db5d65a6-7f55-491f-8ea1-e6f3c1715c00-kube-api-access-q2m7t\") pod \"metallb-operator-webhook-server-65b4bf7cb4-dm7j7\" (UID: \"db5d65a6-7f55-491f-8ea1-e6f3c1715c00\") " pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.729990 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65b595589f-ljh57"] Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.735202 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:26 crc kubenswrapper[4980]: W0107 03:45:26.765086 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2680c24_a9d4_4daa_9ed0_3bc391695662.slice/crio-1ce721a3f83b193bf420239f2f5aa39249f1b4110c9fdc19dc690a9a1d3ab136 WatchSource:0}: Error finding container 1ce721a3f83b193bf420239f2f5aa39249f1b4110c9fdc19dc690a9a1d3ab136: Status 404 returned error can't find the container with id 1ce721a3f83b193bf420239f2f5aa39249f1b4110c9fdc19dc690a9a1d3ab136 Jan 07 03:45:26 crc kubenswrapper[4980]: I0107 03:45:26.986394 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7"] Jan 07 03:45:26 crc kubenswrapper[4980]: W0107 03:45:26.992766 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb5d65a6_7f55_491f_8ea1_e6f3c1715c00.slice/crio-f60111e7526b210f538d500da33078a51540ff94797ec95cc4a68d9f15c0ae48 WatchSource:0}: Error finding container f60111e7526b210f538d500da33078a51540ff94797ec95cc4a68d9f15c0ae48: Status 404 returned error can't find the container with id f60111e7526b210f538d500da33078a51540ff94797ec95cc4a68d9f15c0ae48 Jan 07 03:45:27 crc kubenswrapper[4980]: I0107 03:45:27.610940 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" event={"ID":"a2680c24-a9d4-4daa-9ed0-3bc391695662","Type":"ContainerStarted","Data":"1ce721a3f83b193bf420239f2f5aa39249f1b4110c9fdc19dc690a9a1d3ab136"} Jan 07 03:45:27 crc kubenswrapper[4980]: I0107 03:45:27.613107 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" 
event={"ID":"db5d65a6-7f55-491f-8ea1-e6f3c1715c00","Type":"ContainerStarted","Data":"f60111e7526b210f538d500da33078a51540ff94797ec95cc4a68d9f15c0ae48"} Jan 07 03:45:32 crc kubenswrapper[4980]: I0107 03:45:32.640457 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:32 crc kubenswrapper[4980]: I0107 03:45:32.691660 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:32 crc kubenswrapper[4980]: I0107 03:45:32.881282 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nsf7b"] Jan 07 03:45:34 crc kubenswrapper[4980]: I0107 03:45:34.658647 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nsf7b" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="registry-server" containerID="cri-o://bb2b5d03bb3d46ff1958500d2ec63d02b1ceebfede403ae3d13f58dd29b9d4fe" gracePeriod=2 Jan 07 03:45:35 crc kubenswrapper[4980]: I0107 03:45:35.671710 4980 generic.go:334] "Generic (PLEG): container finished" podID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerID="bb2b5d03bb3d46ff1958500d2ec63d02b1ceebfede403ae3d13f58dd29b9d4fe" exitCode=0 Jan 07 03:45:35 crc kubenswrapper[4980]: I0107 03:45:35.671776 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nsf7b" event={"ID":"4bfc8152-71b0-4152-9a86-6acb6ed91e04","Type":"ContainerDied","Data":"bb2b5d03bb3d46ff1958500d2ec63d02b1ceebfede403ae3d13f58dd29b9d4fe"} Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.777858 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.832981 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrkrf\" (UniqueName: \"kubernetes.io/projected/4bfc8152-71b0-4152-9a86-6acb6ed91e04-kube-api-access-rrkrf\") pod \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.833107 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-catalog-content\") pod \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.833291 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-utilities\") pod \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\" (UID: \"4bfc8152-71b0-4152-9a86-6acb6ed91e04\") " Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.834541 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-utilities" (OuterVolumeSpecName: "utilities") pod "4bfc8152-71b0-4152-9a86-6acb6ed91e04" (UID: "4bfc8152-71b0-4152-9a86-6acb6ed91e04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.843809 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfc8152-71b0-4152-9a86-6acb6ed91e04-kube-api-access-rrkrf" (OuterVolumeSpecName: "kube-api-access-rrkrf") pod "4bfc8152-71b0-4152-9a86-6acb6ed91e04" (UID: "4bfc8152-71b0-4152-9a86-6acb6ed91e04"). InnerVolumeSpecName "kube-api-access-rrkrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.935525 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.935588 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrkrf\" (UniqueName: \"kubernetes.io/projected/4bfc8152-71b0-4152-9a86-6acb6ed91e04-kube-api-access-rrkrf\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:37 crc kubenswrapper[4980]: I0107 03:45:37.944469 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bfc8152-71b0-4152-9a86-6acb6ed91e04" (UID: "4bfc8152-71b0-4152-9a86-6acb6ed91e04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.037168 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfc8152-71b0-4152-9a86-6acb6ed91e04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.695448 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" event={"ID":"a2680c24-a9d4-4daa-9ed0-3bc391695662","Type":"ContainerStarted","Data":"6c67ad6c50dfe3cb799f856d32cde067ceec6212dba2c1e77ae1405edfac4600"} Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.696196 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.704388 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" event={"ID":"db5d65a6-7f55-491f-8ea1-e6f3c1715c00","Type":"ContainerStarted","Data":"ba7319305810df59934ec70d7c08883fd716c22e7dc1400af0128c5ccde2eb91"} Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.706333 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.715848 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nsf7b" event={"ID":"4bfc8152-71b0-4152-9a86-6acb6ed91e04","Type":"ContainerDied","Data":"5a03c82f57571db4a2d9d289c34c3e15723ecca23fd2f4a074e95c2cf4291a0c"} Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.716011 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nsf7b" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.716115 4980 scope.go:117] "RemoveContainer" containerID="bb2b5d03bb3d46ff1958500d2ec63d02b1ceebfede403ae3d13f58dd29b9d4fe" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.739263 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" podStartSLOduration=2.785967658 podStartE2EDuration="13.739234273s" podCreationTimestamp="2026-01-07 03:45:25 +0000 UTC" firstStartedPulling="2026-01-07 03:45:26.772785783 +0000 UTC m=+773.338480518" lastFinishedPulling="2026-01-07 03:45:37.726052358 +0000 UTC m=+784.291747133" observedRunningTime="2026-01-07 03:45:38.73168307 +0000 UTC m=+785.297377835" watchObservedRunningTime="2026-01-07 03:45:38.739234273 +0000 UTC m=+785.304929018" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.752408 4980 scope.go:117] "RemoveContainer" containerID="6786af3613ea827b358171e78b942542effd485df5c016314859569576237712" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 
03:45:38.765824 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" podStartSLOduration=2.030467168 podStartE2EDuration="12.765802174s" podCreationTimestamp="2026-01-07 03:45:26 +0000 UTC" firstStartedPulling="2026-01-07 03:45:26.995027325 +0000 UTC m=+773.560722060" lastFinishedPulling="2026-01-07 03:45:37.730362321 +0000 UTC m=+784.296057066" observedRunningTime="2026-01-07 03:45:38.764222136 +0000 UTC m=+785.329916871" watchObservedRunningTime="2026-01-07 03:45:38.765802174 +0000 UTC m=+785.331496919" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.779437 4980 scope.go:117] "RemoveContainer" containerID="865cfc093a1536186c2d9b6c6c3b6f33c233c7cc87782fe81f26fbd7c8ff191f" Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.808171 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nsf7b"] Jan 07 03:45:38 crc kubenswrapper[4980]: I0107 03:45:38.818413 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nsf7b"] Jan 07 03:45:39 crc kubenswrapper[4980]: I0107 03:45:39.743607 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" path="/var/lib/kubelet/pods/4bfc8152-71b0-4152-9a86-6acb6ed91e04/volumes" Jan 07 03:45:56 crc kubenswrapper[4980]: I0107 03:45:56.743315 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-65b4bf7cb4-dm7j7" Jan 07 03:46:16 crc kubenswrapper[4980]: I0107 03:46:16.257319 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-65b595589f-ljh57" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.164442 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-24j8g"] Jan 07 03:46:17 crc kubenswrapper[4980]: E0107 
03:46:17.164804 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="extract-utilities" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.164828 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="extract-utilities" Jan 07 03:46:17 crc kubenswrapper[4980]: E0107 03:46:17.164840 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="extract-content" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.164850 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="extract-content" Jan 07 03:46:17 crc kubenswrapper[4980]: E0107 03:46:17.164866 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="registry-server" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.164875 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="registry-server" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.165008 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfc8152-71b0-4152-9a86-6acb6ed91e04" containerName="registry-server" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.167312 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.169269 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-q7dtl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.170188 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.172043 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.178624 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7"] Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.179285 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.186925 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.196710 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7"] Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.278320 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7qwgg"] Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.279502 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.281924 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.281966 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.282466 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cxgjr" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.282637 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.287399 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-vg2gl"] Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.288245 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289505 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-conf\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289545 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-metrics\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289601 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtnw\" (UniqueName: \"kubernetes.io/projected/dc6c1183-144b-4b67-baad-9e04c4492453-kube-api-access-kwtnw\") pod \"frr-k8s-webhook-server-7784b6fcf-bn2x7\" (UID: \"dc6c1183-144b-4b67-baad-9e04c4492453\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289638 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc6c1183-144b-4b67-baad-9e04c4492453-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-bn2x7\" (UID: \"dc6c1183-144b-4b67-baad-9e04c4492453\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289669 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l75h\" (UniqueName: \"kubernetes.io/projected/3f131e38-245d-400d-8a7b-f9c7dc486db8-kube-api-access-8l75h\") pod \"frr-k8s-24j8g\" (UID: 
\"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289697 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-reloader\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289727 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-startup\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289754 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-sockets\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.289781 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f131e38-245d-400d-8a7b-f9c7dc486db8-metrics-certs\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.292472 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.331035 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-vg2gl"] Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394289 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l75h\" (UniqueName: \"kubernetes.io/projected/3f131e38-245d-400d-8a7b-f9c7dc486db8-kube-api-access-8l75h\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394342 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-reloader\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394370 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-startup\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394391 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-sockets\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394411 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-metrics-certs\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394427 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-cert\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394442 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f131e38-245d-400d-8a7b-f9c7dc486db8-metrics-certs\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394463 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394487 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-conf\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394503 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-metrics\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394518 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-metrics-certs\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " 
pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394534 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6xgj\" (UniqueName: \"kubernetes.io/projected/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-kube-api-access-j6xgj\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394612 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bf49087b-cf7a-41cf-85a4-e76d00ae1381-metallb-excludel2\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394633 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2c6w\" (UniqueName: \"kubernetes.io/projected/bf49087b-cf7a-41cf-85a4-e76d00ae1381-kube-api-access-z2c6w\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394651 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwtnw\" (UniqueName: \"kubernetes.io/projected/dc6c1183-144b-4b67-baad-9e04c4492453-kube-api-access-kwtnw\") pod \"frr-k8s-webhook-server-7784b6fcf-bn2x7\" (UID: \"dc6c1183-144b-4b67-baad-9e04c4492453\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.394681 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc6c1183-144b-4b67-baad-9e04c4492453-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-bn2x7\" (UID: 
\"dc6c1183-144b-4b67-baad-9e04c4492453\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.396011 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-reloader\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.396081 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-conf\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.396153 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-metrics\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.396396 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-startup\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.396398 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3f131e38-245d-400d-8a7b-f9c7dc486db8-frr-sockets\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.412346 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/dc6c1183-144b-4b67-baad-9e04c4492453-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-bn2x7\" (UID: \"dc6c1183-144b-4b67-baad-9e04c4492453\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.412351 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f131e38-245d-400d-8a7b-f9c7dc486db8-metrics-certs\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.415731 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwtnw\" (UniqueName: \"kubernetes.io/projected/dc6c1183-144b-4b67-baad-9e04c4492453-kube-api-access-kwtnw\") pod \"frr-k8s-webhook-server-7784b6fcf-bn2x7\" (UID: \"dc6c1183-144b-4b67-baad-9e04c4492453\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.416922 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l75h\" (UniqueName: \"kubernetes.io/projected/3f131e38-245d-400d-8a7b-f9c7dc486db8-kube-api-access-8l75h\") pod \"frr-k8s-24j8g\" (UID: \"3f131e38-245d-400d-8a7b-f9c7dc486db8\") " pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.486442 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.496217 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-metrics-certs\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.496267 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-cert\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.496293 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.496323 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-metrics-certs\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.496341 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xgj\" (UniqueName: \"kubernetes.io/projected/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-kube-api-access-j6xgj\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.496363 
4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bf49087b-cf7a-41cf-85a4-e76d00ae1381-metallb-excludel2\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.496379 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2c6w\" (UniqueName: \"kubernetes.io/projected/bf49087b-cf7a-41cf-85a4-e76d00ae1381-kube-api-access-z2c6w\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: E0107 03:46:17.496485 4980 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 07 03:46:17 crc kubenswrapper[4980]: E0107 03:46:17.496612 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist podName:bf49087b-cf7a-41cf-85a4-e76d00ae1381 nodeName:}" failed. No retries permitted until 2026-01-07 03:46:17.996577973 +0000 UTC m=+824.562272718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist") pod "speaker-7qwgg" (UID: "bf49087b-cf7a-41cf-85a4-e76d00ae1381") : secret "metallb-memberlist" not found Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.497343 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bf49087b-cf7a-41cf-85a4-e76d00ae1381-metallb-excludel2\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.499337 4980 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.501432 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-metrics-certs\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.501767 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.502229 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-metrics-certs\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.511574 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-cert\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.517185 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2c6w\" (UniqueName: \"kubernetes.io/projected/bf49087b-cf7a-41cf-85a4-e76d00ae1381-kube-api-access-z2c6w\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.520422 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xgj\" (UniqueName: \"kubernetes.io/projected/fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3-kube-api-access-j6xgj\") pod \"controller-5bddd4b946-vg2gl\" (UID: \"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3\") " pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.618758 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.823449 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-vg2gl"] Jan 07 03:46:17 crc kubenswrapper[4980]: W0107 03:46:17.829888 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1d0de6_6d92_4b24_b71b_7f2f04a93ca3.slice/crio-f3baf14399f1e3ab88e3fbe3417c109b2a193080a28e46cd22e812afa3a30b16 WatchSource:0}: Error finding container f3baf14399f1e3ab88e3fbe3417c109b2a193080a28e46cd22e812afa3a30b16: Status 404 returned error can't find the container with id f3baf14399f1e3ab88e3fbe3417c109b2a193080a28e46cd22e812afa3a30b16 Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.981124 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-vg2gl" event={"ID":"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3","Type":"ContainerStarted","Data":"f3baf14399f1e3ab88e3fbe3417c109b2a193080a28e46cd22e812afa3a30b16"} Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.982110 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerStarted","Data":"bf6af06a96ebae8720c2f4a4c471e66ef56f9ff2d2d41c65834662a444556379"} Jan 07 03:46:17 crc kubenswrapper[4980]: I0107 03:46:17.984472 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7"] Jan 07 03:46:17 crc kubenswrapper[4980]: W0107 03:46:17.989668 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc6c1183_144b_4b67_baad_9e04c4492453.slice/crio-01e7d31e8c5964cebd55fac6a4bf8d685a9e31cea7a826e2dfde40fdb0fe779d WatchSource:0}: Error finding container 
01e7d31e8c5964cebd55fac6a4bf8d685a9e31cea7a826e2dfde40fdb0fe779d: Status 404 returned error can't find the container with id 01e7d31e8c5964cebd55fac6a4bf8d685a9e31cea7a826e2dfde40fdb0fe779d Jan 07 03:46:18 crc kubenswrapper[4980]: I0107 03:46:18.003986 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:18 crc kubenswrapper[4980]: E0107 03:46:18.004262 4980 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 07 03:46:18 crc kubenswrapper[4980]: E0107 03:46:18.004348 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist podName:bf49087b-cf7a-41cf-85a4-e76d00ae1381 nodeName:}" failed. No retries permitted until 2026-01-07 03:46:19.004325706 +0000 UTC m=+825.570020481 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist") pod "speaker-7qwgg" (UID: "bf49087b-cf7a-41cf-85a4-e76d00ae1381") : secret "metallb-memberlist" not found Jan 07 03:46:18 crc kubenswrapper[4980]: I0107 03:46:18.991816 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" event={"ID":"dc6c1183-144b-4b67-baad-9e04c4492453","Type":"ContainerStarted","Data":"01e7d31e8c5964cebd55fac6a4bf8d685a9e31cea7a826e2dfde40fdb0fe779d"} Jan 07 03:46:18 crc kubenswrapper[4980]: I0107 03:46:18.994007 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-vg2gl" event={"ID":"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3","Type":"ContainerStarted","Data":"13b15426d5e31501ae4d26f3028c4446b248e2e87fe427f8b66b8cb0d3a13472"} Jan 07 03:46:18 crc kubenswrapper[4980]: I0107 03:46:18.994049 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-vg2gl" event={"ID":"fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3","Type":"ContainerStarted","Data":"49714b1afeb4c183134604aa0725b0e53a3e5c458fdfb3c026d83e7825943f87"} Jan 07 03:46:18 crc kubenswrapper[4980]: I0107 03:46:18.995314 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:19 crc kubenswrapper[4980]: I0107 03:46:19.021407 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-vg2gl" podStartSLOduration=2.021383781 podStartE2EDuration="2.021383781s" podCreationTimestamp="2026-01-07 03:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:46:19.0177975 +0000 UTC m=+825.583492235" watchObservedRunningTime="2026-01-07 03:46:19.021383781 +0000 UTC m=+825.587078516" Jan 07 03:46:19 crc 
kubenswrapper[4980]: I0107 03:46:19.024538 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:19 crc kubenswrapper[4980]: I0107 03:46:19.034397 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf49087b-cf7a-41cf-85a4-e76d00ae1381-memberlist\") pod \"speaker-7qwgg\" (UID: \"bf49087b-cf7a-41cf-85a4-e76d00ae1381\") " pod="metallb-system/speaker-7qwgg" Jan 07 03:46:19 crc kubenswrapper[4980]: I0107 03:46:19.105032 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7qwgg" Jan 07 03:46:19 crc kubenswrapper[4980]: W0107 03:46:19.138907 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf49087b_cf7a_41cf_85a4_e76d00ae1381.slice/crio-c3c9366a97b92d098a47aa87973c6a054641a07771cff48981b67241d5424912 WatchSource:0}: Error finding container c3c9366a97b92d098a47aa87973c6a054641a07771cff48981b67241d5424912: Status 404 returned error can't find the container with id c3c9366a97b92d098a47aa87973c6a054641a07771cff48981b67241d5424912 Jan 07 03:46:20 crc kubenswrapper[4980]: I0107 03:46:20.001622 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7qwgg" event={"ID":"bf49087b-cf7a-41cf-85a4-e76d00ae1381","Type":"ContainerStarted","Data":"54744897cd63b0adeee917a415cc40199a4f8c324df9a0fad269f46d770f8816"} Jan 07 03:46:20 crc kubenswrapper[4980]: I0107 03:46:20.001754 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7qwgg" 
event={"ID":"bf49087b-cf7a-41cf-85a4-e76d00ae1381","Type":"ContainerStarted","Data":"c3c9366a97b92d098a47aa87973c6a054641a07771cff48981b67241d5424912"} Jan 07 03:46:21 crc kubenswrapper[4980]: I0107 03:46:21.011424 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7qwgg" event={"ID":"bf49087b-cf7a-41cf-85a4-e76d00ae1381","Type":"ContainerStarted","Data":"0838152ca1ec14ee1f9bccc9cb5522dc528074c9b92fad212ca6de3859cc0137"} Jan 07 03:46:22 crc kubenswrapper[4980]: I0107 03:46:22.023327 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7qwgg" Jan 07 03:46:23 crc kubenswrapper[4980]: I0107 03:46:23.768356 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7qwgg" podStartSLOduration=6.768328448 podStartE2EDuration="6.768328448s" podCreationTimestamp="2026-01-07 03:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:46:21.037157384 +0000 UTC m=+827.602852119" watchObservedRunningTime="2026-01-07 03:46:23.768328448 +0000 UTC m=+830.334023213" Jan 07 03:46:27 crc kubenswrapper[4980]: I0107 03:46:27.624141 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-vg2gl" Jan 07 03:46:29 crc kubenswrapper[4980]: I0107 03:46:29.110687 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7qwgg" Jan 07 03:46:30 crc kubenswrapper[4980]: I0107 03:46:30.082467 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" event={"ID":"dc6c1183-144b-4b67-baad-9e04c4492453","Type":"ContainerStarted","Data":"64e5f54c57296174864322faa9742df8137f3b8d2125b84475bc0735f1d5575f"} Jan 07 03:46:30 crc kubenswrapper[4980]: I0107 03:46:30.082637 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:30 crc kubenswrapper[4980]: I0107 03:46:30.085657 4980 generic.go:334] "Generic (PLEG): container finished" podID="3f131e38-245d-400d-8a7b-f9c7dc486db8" containerID="addf58ada8b0635115f53c8ec5b7343ee4dec74e738c1e2301fc3ccf685024fe" exitCode=0 Jan 07 03:46:30 crc kubenswrapper[4980]: I0107 03:46:30.085759 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerDied","Data":"addf58ada8b0635115f53c8ec5b7343ee4dec74e738c1e2301fc3ccf685024fe"} Jan 07 03:46:30 crc kubenswrapper[4980]: I0107 03:46:30.104939 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" podStartSLOduration=2.399971542 podStartE2EDuration="13.104914037s" podCreationTimestamp="2026-01-07 03:46:17 +0000 UTC" firstStartedPulling="2026-01-07 03:46:17.993859062 +0000 UTC m=+824.559553807" lastFinishedPulling="2026-01-07 03:46:28.698801527 +0000 UTC m=+835.264496302" observedRunningTime="2026-01-07 03:46:30.104586317 +0000 UTC m=+836.670281092" watchObservedRunningTime="2026-01-07 03:46:30.104914037 +0000 UTC m=+836.670608812" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.110443 4980 generic.go:334] "Generic (PLEG): container finished" podID="3f131e38-245d-400d-8a7b-f9c7dc486db8" containerID="28bb6e7ea8f78672e5bd44e248eca5c27772bc93b8edd668dcbc52a75fc54e72" exitCode=0 Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.110586 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerDied","Data":"28bb6e7ea8f78672e5bd44e248eca5c27772bc93b8edd668dcbc52a75fc54e72"} Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.311939 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-q54qd"] 
Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.313249 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.320914 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.324301 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xzxt8" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.324308 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.327280 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q54qd"] Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.490667 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hkz\" (UniqueName: \"kubernetes.io/projected/5cb72193-59c1-49a9-b1fd-26191d36f265-kube-api-access-h2hkz\") pod \"openstack-operator-index-q54qd\" (UID: \"5cb72193-59c1-49a9-b1fd-26191d36f265\") " pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.592093 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2hkz\" (UniqueName: \"kubernetes.io/projected/5cb72193-59c1-49a9-b1fd-26191d36f265-kube-api-access-h2hkz\") pod \"openstack-operator-index-q54qd\" (UID: \"5cb72193-59c1-49a9-b1fd-26191d36f265\") " pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.620698 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2hkz\" (UniqueName: 
\"kubernetes.io/projected/5cb72193-59c1-49a9-b1fd-26191d36f265-kube-api-access-h2hkz\") pod \"openstack-operator-index-q54qd\" (UID: \"5cb72193-59c1-49a9-b1fd-26191d36f265\") " pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:32 crc kubenswrapper[4980]: I0107 03:46:32.636405 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:33 crc kubenswrapper[4980]: I0107 03:46:33.126348 4980 generic.go:334] "Generic (PLEG): container finished" podID="3f131e38-245d-400d-8a7b-f9c7dc486db8" containerID="408ef0925fbda52563ece011e10580d5ca784aedd3b2fadb87749893fba05e4b" exitCode=0 Jan 07 03:46:33 crc kubenswrapper[4980]: I0107 03:46:33.126465 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerDied","Data":"408ef0925fbda52563ece011e10580d5ca784aedd3b2fadb87749893fba05e4b"} Jan 07 03:46:33 crc kubenswrapper[4980]: I0107 03:46:33.178396 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q54qd"] Jan 07 03:46:34 crc kubenswrapper[4980]: I0107 03:46:34.148842 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q54qd" event={"ID":"5cb72193-59c1-49a9-b1fd-26191d36f265","Type":"ContainerStarted","Data":"cd16f2b4d02fa5a25c0a6e6ea6aa3b5721056c5308141895384877887ac800a0"} Jan 07 03:46:34 crc kubenswrapper[4980]: I0107 03:46:34.152497 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerStarted","Data":"5950291e2adec5eaba460fa24dcaaa3e27e02705bddaa17f2cac07c6474b1533"} Jan 07 03:46:34 crc kubenswrapper[4980]: I0107 03:46:34.152574 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" 
event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerStarted","Data":"33fb216f59dd5faad5e5478718cba73a10dbba4282233e7cb7734d9d160672e5"} Jan 07 03:46:35 crc kubenswrapper[4980]: I0107 03:46:35.164317 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerStarted","Data":"c2b6ce06d6cf7b9406cd0c04ba485c4ff7c1d5fa0c70e460f6def4124f6b4e18"} Jan 07 03:46:35 crc kubenswrapper[4980]: I0107 03:46:35.164392 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerStarted","Data":"0cdd7824abf48e72728f0235ff2cab4dcac9a7c3680328b67ae74340c99a01f6"} Jan 07 03:46:36 crc kubenswrapper[4980]: I0107 03:46:36.175068 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerStarted","Data":"f6111fd1a3200458345f62d6ffcc717d3ee3725494106dbfe65a2fa16b7cd62c"} Jan 07 03:46:36 crc kubenswrapper[4980]: I0107 03:46:36.175367 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:36 crc kubenswrapper[4980]: I0107 03:46:36.175377 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-24j8g" event={"ID":"3f131e38-245d-400d-8a7b-f9c7dc486db8","Type":"ContainerStarted","Data":"64314cfd20890b573e5285e3e6b901da24eaaf535ff8460f3567fe2bbe4a8ad8"} Jan 07 03:46:36 crc kubenswrapper[4980]: I0107 03:46:36.202662 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-24j8g" podStartSLOduration=8.317825529 podStartE2EDuration="19.202627363s" podCreationTimestamp="2026-01-07 03:46:17 +0000 UTC" firstStartedPulling="2026-01-07 03:46:17.773330117 +0000 UTC m=+824.339024852" lastFinishedPulling="2026-01-07 03:46:28.658131921 +0000 UTC m=+835.223826686" 
observedRunningTime="2026-01-07 03:46:36.195660778 +0000 UTC m=+842.761355533" watchObservedRunningTime="2026-01-07 03:46:36.202627363 +0000 UTC m=+842.768322098" Jan 07 03:46:37 crc kubenswrapper[4980]: I0107 03:46:37.487075 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:37 crc kubenswrapper[4980]: I0107 03:46:37.535604 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:38 crc kubenswrapper[4980]: I0107 03:46:38.189755 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q54qd" event={"ID":"5cb72193-59c1-49a9-b1fd-26191d36f265","Type":"ContainerStarted","Data":"8252c6eedec35e5f2ba0f91d32e714b1fe14f88893a967140ab8dca14d1bd072"} Jan 07 03:46:38 crc kubenswrapper[4980]: I0107 03:46:38.235509 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-q54qd" podStartSLOduration=2.365958777 podStartE2EDuration="6.235486714s" podCreationTimestamp="2026-01-07 03:46:32 +0000 UTC" firstStartedPulling="2026-01-07 03:46:33.21393788 +0000 UTC m=+839.779632625" lastFinishedPulling="2026-01-07 03:46:37.083465827 +0000 UTC m=+843.649160562" observedRunningTime="2026-01-07 03:46:38.232081529 +0000 UTC m=+844.797776264" watchObservedRunningTime="2026-01-07 03:46:38.235486714 +0000 UTC m=+844.801181459" Jan 07 03:46:42 crc kubenswrapper[4980]: I0107 03:46:42.637466 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:42 crc kubenswrapper[4980]: I0107 03:46:42.637962 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:42 crc kubenswrapper[4980]: I0107 03:46:42.672793 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:43 crc kubenswrapper[4980]: I0107 03:46:43.271966 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-q54qd" Jan 07 03:46:47 crc kubenswrapper[4980]: I0107 03:46:47.490318 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-24j8g" Jan 07 03:46:47 crc kubenswrapper[4980]: I0107 03:46:47.511580 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bn2x7" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.794085 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k"] Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.796005 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.799288 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qvprq" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.808252 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k"] Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.862499 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-util\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.863044 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-bundle\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.863215 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgp9w\" (UniqueName: \"kubernetes.io/projected/1d43d97b-d62f-4e1e-b672-875c0dccca4e-kube-api-access-dgp9w\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.964498 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgp9w\" (UniqueName: \"kubernetes.io/projected/1d43d97b-d62f-4e1e-b672-875c0dccca4e-kube-api-access-dgp9w\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.964623 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-util\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.964655 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-bundle\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.965217 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-bundle\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.965344 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-util\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:48 crc kubenswrapper[4980]: I0107 03:46:48.991353 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgp9w\" (UniqueName: \"kubernetes.io/projected/1d43d97b-d62f-4e1e-b672-875c0dccca4e-kube-api-access-dgp9w\") pod \"88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:49 crc kubenswrapper[4980]: I0107 03:46:49.191350 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:49 crc kubenswrapper[4980]: I0107 03:46:49.519373 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k"] Jan 07 03:46:49 crc kubenswrapper[4980]: W0107 03:46:49.531533 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d43d97b_d62f_4e1e_b672_875c0dccca4e.slice/crio-55a7e8a9027c6547d0f45e8d6c7b59111f2291406b55cd9220f0a9d1e37ab231 WatchSource:0}: Error finding container 55a7e8a9027c6547d0f45e8d6c7b59111f2291406b55cd9220f0a9d1e37ab231: Status 404 returned error can't find the container with id 55a7e8a9027c6547d0f45e8d6c7b59111f2291406b55cd9220f0a9d1e37ab231 Jan 07 03:46:50 crc kubenswrapper[4980]: I0107 03:46:50.278296 4980 generic.go:334] "Generic (PLEG): container finished" podID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerID="0b02a0adf8903f4e6b1aadc210c150a9cf0a9c9bce74c178cd9ca29a844c369b" exitCode=0 Jan 07 03:46:50 crc kubenswrapper[4980]: I0107 03:46:50.278395 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" event={"ID":"1d43d97b-d62f-4e1e-b672-875c0dccca4e","Type":"ContainerDied","Data":"0b02a0adf8903f4e6b1aadc210c150a9cf0a9c9bce74c178cd9ca29a844c369b"} Jan 07 03:46:50 crc kubenswrapper[4980]: I0107 03:46:50.278473 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" event={"ID":"1d43d97b-d62f-4e1e-b672-875c0dccca4e","Type":"ContainerStarted","Data":"55a7e8a9027c6547d0f45e8d6c7b59111f2291406b55cd9220f0a9d1e37ab231"} Jan 07 03:46:52 crc kubenswrapper[4980]: I0107 03:46:52.297421 4980 generic.go:334] "Generic (PLEG): container finished" 
podID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerID="b00d33300c34becc94e29ced587d6fa1a772beb944052cb6c048a2703fd3e341" exitCode=0 Jan 07 03:46:52 crc kubenswrapper[4980]: I0107 03:46:52.297514 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" event={"ID":"1d43d97b-d62f-4e1e-b672-875c0dccca4e","Type":"ContainerDied","Data":"b00d33300c34becc94e29ced587d6fa1a772beb944052cb6c048a2703fd3e341"} Jan 07 03:46:53 crc kubenswrapper[4980]: I0107 03:46:53.311548 4980 generic.go:334] "Generic (PLEG): container finished" podID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerID="78721cbfe92a366b04d7d62608461a356eb43e5d93ffc675d49840ecf063ac9b" exitCode=0 Jan 07 03:46:53 crc kubenswrapper[4980]: I0107 03:46:53.311657 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" event={"ID":"1d43d97b-d62f-4e1e-b672-875c0dccca4e","Type":"ContainerDied","Data":"78721cbfe92a366b04d7d62608461a356eb43e5d93ffc675d49840ecf063ac9b"} Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.688081 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.754471 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-bundle\") pod \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.754764 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-util\") pod \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.755024 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgp9w\" (UniqueName: \"kubernetes.io/projected/1d43d97b-d62f-4e1e-b672-875c0dccca4e-kube-api-access-dgp9w\") pod \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\" (UID: \"1d43d97b-d62f-4e1e-b672-875c0dccca4e\") " Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.755819 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-bundle" (OuterVolumeSpecName: "bundle") pod "1d43d97b-d62f-4e1e-b672-875c0dccca4e" (UID: "1d43d97b-d62f-4e1e-b672-875c0dccca4e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.756259 4980 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.763712 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d43d97b-d62f-4e1e-b672-875c0dccca4e-kube-api-access-dgp9w" (OuterVolumeSpecName: "kube-api-access-dgp9w") pod "1d43d97b-d62f-4e1e-b672-875c0dccca4e" (UID: "1d43d97b-d62f-4e1e-b672-875c0dccca4e"). InnerVolumeSpecName "kube-api-access-dgp9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.785341 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-util" (OuterVolumeSpecName: "util") pod "1d43d97b-d62f-4e1e-b672-875c0dccca4e" (UID: "1d43d97b-d62f-4e1e-b672-875c0dccca4e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.857335 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgp9w\" (UniqueName: \"kubernetes.io/projected/1d43d97b-d62f-4e1e-b672-875c0dccca4e-kube-api-access-dgp9w\") on node \"crc\" DevicePath \"\"" Jan 07 03:46:54 crc kubenswrapper[4980]: I0107 03:46:54.857392 4980 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d43d97b-d62f-4e1e-b672-875c0dccca4e-util\") on node \"crc\" DevicePath \"\"" Jan 07 03:46:55 crc kubenswrapper[4980]: I0107 03:46:55.332431 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" event={"ID":"1d43d97b-d62f-4e1e-b672-875c0dccca4e","Type":"ContainerDied","Data":"55a7e8a9027c6547d0f45e8d6c7b59111f2291406b55cd9220f0a9d1e37ab231"} Jan 07 03:46:55 crc kubenswrapper[4980]: I0107 03:46:55.332903 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a7e8a9027c6547d0f45e8d6c7b59111f2291406b55cd9220f0a9d1e37ab231" Jan 07 03:46:55 crc kubenswrapper[4980]: I0107 03:46:55.332835 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.651198 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp"] Jan 07 03:47:01 crc kubenswrapper[4980]: E0107 03:47:01.652147 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerName="util" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.652167 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerName="util" Jan 07 03:47:01 crc kubenswrapper[4980]: E0107 03:47:01.652191 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerName="pull" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.652204 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerName="pull" Jan 07 03:47:01 crc kubenswrapper[4980]: E0107 03:47:01.652227 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerName="extract" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.652241 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerName="extract" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.652431 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d43d97b-d62f-4e1e-b672-875c0dccca4e" containerName="extract" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.653120 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.654966 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-jmggz" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.661218 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mqm7\" (UniqueName: \"kubernetes.io/projected/4af3dbe2-f463-48b6-9264-9d8ad4970648-kube-api-access-2mqm7\") pod \"openstack-operator-controller-operator-54bc58988c-zrwgp\" (UID: \"4af3dbe2-f463-48b6-9264-9d8ad4970648\") " pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.679353 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp"] Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.762151 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mqm7\" (UniqueName: \"kubernetes.io/projected/4af3dbe2-f463-48b6-9264-9d8ad4970648-kube-api-access-2mqm7\") pod \"openstack-operator-controller-operator-54bc58988c-zrwgp\" (UID: \"4af3dbe2-f463-48b6-9264-9d8ad4970648\") " pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.786823 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mqm7\" (UniqueName: \"kubernetes.io/projected/4af3dbe2-f463-48b6-9264-9d8ad4970648-kube-api-access-2mqm7\") pod \"openstack-operator-controller-operator-54bc58988c-zrwgp\" (UID: \"4af3dbe2-f463-48b6-9264-9d8ad4970648\") " pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" Jan 07 03:47:01 crc kubenswrapper[4980]: I0107 03:47:01.978153 4980 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" Jan 07 03:47:02 crc kubenswrapper[4980]: I0107 03:47:02.508928 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp"] Jan 07 03:47:03 crc kubenswrapper[4980]: I0107 03:47:03.386033 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" event={"ID":"4af3dbe2-f463-48b6-9264-9d8ad4970648","Type":"ContainerStarted","Data":"2b7f6d9a373884c179385417be191ab9cf799fa38e595952e33ba7d01e807280"} Jan 07 03:47:06 crc kubenswrapper[4980]: I0107 03:47:06.542677 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:47:06 crc kubenswrapper[4980]: I0107 03:47:06.542992 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:47:07 crc kubenswrapper[4980]: I0107 03:47:07.421402 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" event={"ID":"4af3dbe2-f463-48b6-9264-9d8ad4970648","Type":"ContainerStarted","Data":"89abf586cd6108f9c565eb880fde80edaea83a64db60201a9b9f9be51aa1c5f2"} Jan 07 03:47:07 crc kubenswrapper[4980]: I0107 03:47:07.421904 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" Jan 07 03:47:07 crc kubenswrapper[4980]: I0107 03:47:07.467605 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" podStartSLOduration=2.343076291 podStartE2EDuration="6.46758776s" podCreationTimestamp="2026-01-07 03:47:01 +0000 UTC" firstStartedPulling="2026-01-07 03:47:02.524416079 +0000 UTC m=+869.090110814" lastFinishedPulling="2026-01-07 03:47:06.648927548 +0000 UTC m=+873.214622283" observedRunningTime="2026-01-07 03:47:07.462202824 +0000 UTC m=+874.027897589" watchObservedRunningTime="2026-01-07 03:47:07.46758776 +0000 UTC m=+874.033282505" Jan 07 03:47:11 crc kubenswrapper[4980]: I0107 03:47:11.982137 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-54bc58988c-zrwgp" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.785065 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.787390 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.790458 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-469q4" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.794530 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.795728 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.800834 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jdkmj" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.804011 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.805290 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.806759 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-k7h9d" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.810804 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.815824 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.820541 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.821628 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.824328 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.825167 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.829701 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pf9wn" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.835390 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf98p\" (UniqueName: \"kubernetes.io/projected/93a3e6e3-bd9b-4883-923c-6d58ae83000d-kube-api-access-xf98p\") pod \"heat-operator-controller-manager-658dd65b86-vghwj\" (UID: \"93a3e6e3-bd9b-4883-923c-6d58ae83000d\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.837054 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mzqg9" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.841191 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.844149 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.847821 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9"] Jan 07 03:47:32 
crc kubenswrapper[4980]: I0107 03:47:32.883643 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.905706 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.908274 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9lgnz" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.938126 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvzfh\" (UniqueName: \"kubernetes.io/projected/81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61-kube-api-access-kvzfh\") pod \"cinder-operator-controller-manager-78979fc445-qj7hn\" (UID: \"81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.938408 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf98p\" (UniqueName: \"kubernetes.io/projected/93a3e6e3-bd9b-4883-923c-6d58ae83000d-kube-api-access-xf98p\") pod \"heat-operator-controller-manager-658dd65b86-vghwj\" (UID: \"93a3e6e3-bd9b-4883-923c-6d58ae83000d\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.938546 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65z2\" (UniqueName: \"kubernetes.io/projected/2e8333e2-a664-4f9a-8ddb-07e31ddc3020-kube-api-access-g65z2\") pod \"barbican-operator-controller-manager-f6f74d6db-mqsl2\" (UID: \"2e8333e2-a664-4f9a-8ddb-07e31ddc3020\") " 
pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.938718 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcj77\" (UniqueName: \"kubernetes.io/projected/cc01f5c0-320a-4645-bb96-5bd8b6490e08-kube-api-access-jcj77\") pod \"designate-operator-controller-manager-66f8b87655-8rjjc\" (UID: \"cc01f5c0-320a-4645-bb96-5bd8b6490e08\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.946695 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkcbj\" (UniqueName: \"kubernetes.io/projected/920991d2-089f-4864-8237-9684c6282a04-kube-api-access-gkcbj\") pod \"glance-operator-controller-manager-7b549fc966-gnrv9\" (UID: \"920991d2-089f-4864-8237-9684c6282a04\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.948616 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.962757 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.962857 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.964706 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.965175 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s6jd7" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.980624 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g"] Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.981532 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" Jan 07 03:47:32 crc kubenswrapper[4980]: I0107 03:47:32.987892 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6lp4b" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:32.996971 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf98p\" (UniqueName: \"kubernetes.io/projected/93a3e6e3-bd9b-4883-923c-6d58ae83000d-kube-api-access-xf98p\") pod \"heat-operator-controller-manager-658dd65b86-vghwj\" (UID: \"93a3e6e3-bd9b-4883-923c-6d58ae83000d\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.002732 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.017521 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-ndxck"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 
03:47:33.018417 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.034620 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.035975 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kfr9j" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.047977 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhb95\" (UniqueName: \"kubernetes.io/projected/509933ce-8dca-4f14-bdc4-a5f1608954b3-kube-api-access-bhb95\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-r9bm7\" (UID: \"509933ce-8dca-4f14-bdc4-a5f1608954b3\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048014 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfll\" (UniqueName: \"kubernetes.io/projected/d8d586b5-b752-4122-99af-ba4ce3bbad29-kube-api-access-jwfll\") pod \"keystone-operator-controller-manager-568985c78-ndxck\" (UID: \"d8d586b5-b752-4122-99af-ba4ce3bbad29\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048044 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g65z2\" (UniqueName: \"kubernetes.io/projected/2e8333e2-a664-4f9a-8ddb-07e31ddc3020-kube-api-access-g65z2\") pod \"barbican-operator-controller-manager-f6f74d6db-mqsl2\" (UID: \"2e8333e2-a664-4f9a-8ddb-07e31ddc3020\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" Jan 
07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048066 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882dk\" (UniqueName: \"kubernetes.io/projected/0b63f351-f7ac-44a4-8a65-a6357043af12-kube-api-access-882dk\") pod \"ironic-operator-controller-manager-f99f54bc8-s5s7g\" (UID: \"0b63f351-f7ac-44a4-8a65-a6357043af12\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048084 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcj77\" (UniqueName: \"kubernetes.io/projected/cc01f5c0-320a-4645-bb96-5bd8b6490e08-kube-api-access-jcj77\") pod \"designate-operator-controller-manager-66f8b87655-8rjjc\" (UID: \"cc01f5c0-320a-4645-bb96-5bd8b6490e08\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048105 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb9j\" (UniqueName: \"kubernetes.io/projected/4223a956-7692-4bcc-8193-02312792b1f9-kube-api-access-8cb9j\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048137 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkcbj\" (UniqueName: \"kubernetes.io/projected/920991d2-089f-4864-8237-9684c6282a04-kube-api-access-gkcbj\") pod \"glance-operator-controller-manager-7b549fc966-gnrv9\" (UID: \"920991d2-089f-4864-8237-9684c6282a04\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048175 4980 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kvzfh\" (UniqueName: \"kubernetes.io/projected/81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61-kube-api-access-kvzfh\") pod \"cinder-operator-controller-manager-78979fc445-qj7hn\" (UID: \"81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.048191 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.070633 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.071501 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-ndxck"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.071597 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.074970 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-n99tv" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.084605 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.085505 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.091828 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rr9bf" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.095049 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvzfh\" (UniqueName: \"kubernetes.io/projected/81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61-kube-api-access-kvzfh\") pod \"cinder-operator-controller-manager-78979fc445-qj7hn\" (UID: \"81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.097211 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.107868 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g65z2\" (UniqueName: \"kubernetes.io/projected/2e8333e2-a664-4f9a-8ddb-07e31ddc3020-kube-api-access-g65z2\") pod \"barbican-operator-controller-manager-f6f74d6db-mqsl2\" (UID: \"2e8333e2-a664-4f9a-8ddb-07e31ddc3020\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.109214 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkcbj\" (UniqueName: \"kubernetes.io/projected/920991d2-089f-4864-8237-9684c6282a04-kube-api-access-gkcbj\") pod \"glance-operator-controller-manager-7b549fc966-gnrv9\" (UID: \"920991d2-089f-4864-8237-9684c6282a04\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.111424 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jcj77\" (UniqueName: \"kubernetes.io/projected/cc01f5c0-320a-4645-bb96-5bd8b6490e08-kube-api-access-jcj77\") pod \"designate-operator-controller-manager-66f8b87655-8rjjc\" (UID: \"cc01f5c0-320a-4645-bb96-5bd8b6490e08\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.118170 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.119546 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.120404 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.128611 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.128846 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pdzqq" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.131821 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.132896 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.133518 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.136226 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-w4z9d" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.138098 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.139030 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.140902 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rnnmz" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.142614 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.156478 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.156855 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.157904 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882dk\" (UniqueName: \"kubernetes.io/projected/0b63f351-f7ac-44a4-8a65-a6357043af12-kube-api-access-882dk\") pod \"ironic-operator-controller-manager-f99f54bc8-s5s7g\" (UID: \"0b63f351-f7ac-44a4-8a65-a6357043af12\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.157932 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb9j\" (UniqueName: \"kubernetes.io/projected/4223a956-7692-4bcc-8193-02312792b1f9-kube-api-access-8cb9j\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.158011 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.158060 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhb95\" (UniqueName: \"kubernetes.io/projected/509933ce-8dca-4f14-bdc4-a5f1608954b3-kube-api-access-bhb95\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-r9bm7\" (UID: \"509933ce-8dca-4f14-bdc4-a5f1608954b3\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.158078 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jwfll\" (UniqueName: \"kubernetes.io/projected/d8d586b5-b752-4122-99af-ba4ce3bbad29-kube-api-access-jwfll\") pod \"keystone-operator-controller-manager-568985c78-ndxck\" (UID: \"d8d586b5-b752-4122-99af-ba4ce3bbad29\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.159067 4980 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.159193 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert podName:4223a956-7692-4bcc-8193-02312792b1f9 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:33.659171708 +0000 UTC m=+900.224866443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert") pod "infra-operator-controller-manager-6d99759cf-c5hk9" (UID: "4223a956-7692-4bcc-8193-02312792b1f9") : secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.168036 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.174704 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.184769 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.185636 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.193888 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hxkhl" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.202838 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.209306 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhb95\" (UniqueName: \"kubernetes.io/projected/509933ce-8dca-4f14-bdc4-a5f1608954b3-kube-api-access-bhb95\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-r9bm7\" (UID: \"509933ce-8dca-4f14-bdc4-a5f1608954b3\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.209994 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwfll\" (UniqueName: \"kubernetes.io/projected/d8d586b5-b752-4122-99af-ba4ce3bbad29-kube-api-access-jwfll\") pod \"keystone-operator-controller-manager-568985c78-ndxck\" (UID: \"d8d586b5-b752-4122-99af-ba4ce3bbad29\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.219629 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.220539 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.225185 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882dk\" (UniqueName: \"kubernetes.io/projected/0b63f351-f7ac-44a4-8a65-a6357043af12-kube-api-access-882dk\") pod \"ironic-operator-controller-manager-f99f54bc8-s5s7g\" (UID: \"0b63f351-f7ac-44a4-8a65-a6357043af12\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.227794 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-8r82z" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.237535 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb9j\" (UniqueName: \"kubernetes.io/projected/4223a956-7692-4bcc-8193-02312792b1f9-kube-api-access-8cb9j\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.237690 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.238511 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.239093 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.242386 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.244117 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.248121 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fw2fd" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.248288 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-s5vz7" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.260411 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.262177 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrr98\" (UniqueName: \"kubernetes.io/projected/82ed1518-12d9-412b-86cc-03fbb1f74bd6-kube-api-access-xrr98\") pod \"neutron-operator-controller-manager-7cd87b778f-kpmrn\" (UID: \"82ed1518-12d9-412b-86cc-03fbb1f74bd6\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.262349 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xf2c\" (UniqueName: \"kubernetes.io/projected/58abb189-9361-4eac-8663-55e110e21383-kube-api-access-9xf2c\") pod \"placement-operator-controller-manager-9b6f8f78c-d25kc\" (UID: 
\"58abb189-9361-4eac-8663-55e110e21383\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.262465 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vdxj\" (UniqueName: \"kubernetes.io/projected/86933336-6f6c-4327-bcde-a4d1a6caba77-kube-api-access-2vdxj\") pod \"manila-operator-controller-manager-598945d5b8-5jzx4\" (UID: \"86933336-6f6c-4327-bcde-a4d1a6caba77\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.262623 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.262741 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzpx\" (UniqueName: \"kubernetes.io/projected/881d9164-37f7-48da-b203-a2e5db8e2d23-kube-api-access-zxzpx\") pod \"octavia-operator-controller-manager-68c649d9d-fd8dn\" (UID: \"881d9164-37f7-48da-b203-a2e5db8e2d23\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.262848 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvdkv\" (UniqueName: \"kubernetes.io/projected/96049d0d-7c90-4cab-a18c-5fbd4e9f8373-kube-api-access-gvdkv\") pod \"nova-operator-controller-manager-5fbbf8b6cc-55fcj\" (UID: \"96049d0d-7c90-4cab-a18c-5fbd4e9f8373\") " 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.262952 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7nx6\" (UniqueName: \"kubernetes.io/projected/edf44de7-04e1-435c-a943-c47873d4e364-kube-api-access-k7nx6\") pod \"mariadb-operator-controller-manager-7b88bfc995-cc9rk\" (UID: \"edf44de7-04e1-435c-a943-c47873d4e364\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.263100 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwkh2\" (UniqueName: \"kubernetes.io/projected/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-kube-api-access-vwkh2\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.264884 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.277603 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.282687 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.283483 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.289247 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mpl77" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.298437 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.306613 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.366069 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.368047 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxzpx\" (UniqueName: \"kubernetes.io/projected/881d9164-37f7-48da-b203-a2e5db8e2d23-kube-api-access-zxzpx\") pod \"octavia-operator-controller-manager-68c649d9d-fd8dn\" (UID: \"881d9164-37f7-48da-b203-a2e5db8e2d23\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369191 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwqr\" (UniqueName: \"kubernetes.io/projected/e3f2e1ae-fa58-4090-909d-4efdacb15545-kube-api-access-jwwqr\") pod \"swift-operator-controller-manager-bb586bbf4-b4dq4\" (UID: \"e3f2e1ae-fa58-4090-909d-4efdacb15545\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369241 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gvdkv\" (UniqueName: \"kubernetes.io/projected/96049d0d-7c90-4cab-a18c-5fbd4e9f8373-kube-api-access-gvdkv\") pod \"nova-operator-controller-manager-5fbbf8b6cc-55fcj\" (UID: \"96049d0d-7c90-4cab-a18c-5fbd4e9f8373\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369286 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7nx6\" (UniqueName: \"kubernetes.io/projected/edf44de7-04e1-435c-a943-c47873d4e364-kube-api-access-k7nx6\") pod \"mariadb-operator-controller-manager-7b88bfc995-cc9rk\" (UID: \"edf44de7-04e1-435c-a943-c47873d4e364\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369326 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmr7x\" (UniqueName: \"kubernetes.io/projected/c86a562f-bdd6-4463-8edc-6ce72f41af16-kube-api-access-pmr7x\") pod \"ovn-operator-controller-manager-bf6d4f946-fdnhb\" (UID: \"c86a562f-bdd6-4463-8edc-6ce72f41af16\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369399 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwkh2\" (UniqueName: \"kubernetes.io/projected/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-kube-api-access-vwkh2\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369435 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrr98\" (UniqueName: \"kubernetes.io/projected/82ed1518-12d9-412b-86cc-03fbb1f74bd6-kube-api-access-xrr98\") pod 
\"neutron-operator-controller-manager-7cd87b778f-kpmrn\" (UID: \"82ed1518-12d9-412b-86cc-03fbb1f74bd6\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369459 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xf2c\" (UniqueName: \"kubernetes.io/projected/58abb189-9361-4eac-8663-55e110e21383-kube-api-access-9xf2c\") pod \"placement-operator-controller-manager-9b6f8f78c-d25kc\" (UID: \"58abb189-9361-4eac-8663-55e110e21383\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369491 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vdxj\" (UniqueName: \"kubernetes.io/projected/86933336-6f6c-4327-bcde-a4d1a6caba77-kube-api-access-2vdxj\") pod \"manila-operator-controller-manager-598945d5b8-5jzx4\" (UID: \"86933336-6f6c-4327-bcde-a4d1a6caba77\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.369535 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.369693 4980 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.369743 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert 
podName:b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c nodeName:}" failed. No retries permitted until 2026-01-07 03:47:33.869728496 +0000 UTC m=+900.435423231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" (UID: "b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.374001 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.398469 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxzpx\" (UniqueName: \"kubernetes.io/projected/881d9164-37f7-48da-b203-a2e5db8e2d23-kube-api-access-zxzpx\") pod \"octavia-operator-controller-manager-68c649d9d-fd8dn\" (UID: \"881d9164-37f7-48da-b203-a2e5db8e2d23\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.406492 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.414008 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xf2c\" (UniqueName: \"kubernetes.io/projected/58abb189-9361-4eac-8663-55e110e21383-kube-api-access-9xf2c\") pod \"placement-operator-controller-manager-9b6f8f78c-d25kc\" (UID: \"58abb189-9361-4eac-8663-55e110e21383\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.419912 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrr98\" (UniqueName: \"kubernetes.io/projected/82ed1518-12d9-412b-86cc-03fbb1f74bd6-kube-api-access-xrr98\") pod \"neutron-operator-controller-manager-7cd87b778f-kpmrn\" (UID: \"82ed1518-12d9-412b-86cc-03fbb1f74bd6\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.422746 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwkh2\" (UniqueName: \"kubernetes.io/projected/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-kube-api-access-vwkh2\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.436673 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvdkv\" (UniqueName: \"kubernetes.io/projected/96049d0d-7c90-4cab-a18c-5fbd4e9f8373-kube-api-access-gvdkv\") pod \"nova-operator-controller-manager-5fbbf8b6cc-55fcj\" (UID: \"96049d0d-7c90-4cab-a18c-5fbd4e9f8373\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.446916 
4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vdxj\" (UniqueName: \"kubernetes.io/projected/86933336-6f6c-4327-bcde-a4d1a6caba77-kube-api-access-2vdxj\") pod \"manila-operator-controller-manager-598945d5b8-5jzx4\" (UID: \"86933336-6f6c-4327-bcde-a4d1a6caba77\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.456229 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7nx6\" (UniqueName: \"kubernetes.io/projected/edf44de7-04e1-435c-a943-c47873d4e364-kube-api-access-k7nx6\") pod \"mariadb-operator-controller-manager-7b88bfc995-cc9rk\" (UID: \"edf44de7-04e1-435c-a943-c47873d4e364\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.459136 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.460264 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.462848 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-67tft" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.478206 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhk9\" (UniqueName: \"kubernetes.io/projected/b6565eee-ab9b-4a1a-a5a8-6036df399731-kube-api-access-2qhk9\") pod \"test-operator-controller-manager-6c866cfdcb-8qlqr\" (UID: \"b6565eee-ab9b-4a1a-a5a8-6036df399731\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.478317 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwwqr\" (UniqueName: \"kubernetes.io/projected/e3f2e1ae-fa58-4090-909d-4efdacb15545-kube-api-access-jwwqr\") pod \"swift-operator-controller-manager-bb586bbf4-b4dq4\" (UID: \"e3f2e1ae-fa58-4090-909d-4efdacb15545\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.478383 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmr7x\" (UniqueName: \"kubernetes.io/projected/c86a562f-bdd6-4463-8edc-6ce72f41af16-kube-api-access-pmr7x\") pod \"ovn-operator-controller-manager-bf6d4f946-fdnhb\" (UID: \"c86a562f-bdd6-4463-8edc-6ce72f41af16\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.478405 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2rgl\" (UniqueName: \"kubernetes.io/projected/28cf4151-f7be-4992-87f8-e34bf1d0a9c0-kube-api-access-k2rgl\") pod 
\"telemetry-operator-controller-manager-68d988df55-d7p5d\" (UID: \"28cf4151-f7be-4992-87f8-e34bf1d0a9c0\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.488201 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.492884 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.507547 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.508956 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.523088 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pzqj4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.524324 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmr7x\" (UniqueName: \"kubernetes.io/projected/c86a562f-bdd6-4463-8edc-6ce72f41af16-kube-api-access-pmr7x\") pod \"ovn-operator-controller-manager-bf6d4f946-fdnhb\" (UID: \"c86a562f-bdd6-4463-8edc-6ce72f41af16\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.525041 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwwqr\" (UniqueName: \"kubernetes.io/projected/e3f2e1ae-fa58-4090-909d-4efdacb15545-kube-api-access-jwwqr\") pod \"swift-operator-controller-manager-bb586bbf4-b4dq4\" (UID: 
\"e3f2e1ae-fa58-4090-909d-4efdacb15545\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.528916 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.529252 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.552344 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.561865 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.573406 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.574525 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.579038 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.579265 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.579414 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h6l9p" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.579708 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2rgl\" (UniqueName: \"kubernetes.io/projected/28cf4151-f7be-4992-87f8-e34bf1d0a9c0-kube-api-access-k2rgl\") pod \"telemetry-operator-controller-manager-68d988df55-d7p5d\" (UID: \"28cf4151-f7be-4992-87f8-e34bf1d0a9c0\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.579789 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnjj\" (UniqueName: \"kubernetes.io/projected/c1ae6abf-8410-4816-b2a1-b6a9f0550eb2-kube-api-access-klnjj\") pod \"watcher-operator-controller-manager-9dbdf6486-qlpkh\" (UID: \"c1ae6abf-8410-4816-b2a1-b6a9f0550eb2\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.579824 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qhk9\" (UniqueName: \"kubernetes.io/projected/b6565eee-ab9b-4a1a-a5a8-6036df399731-kube-api-access-2qhk9\") pod \"test-operator-controller-manager-6c866cfdcb-8qlqr\" (UID: \"b6565eee-ab9b-4a1a-a5a8-6036df399731\") " 
pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.585844 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.585988 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.597944 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2rgl\" (UniqueName: \"kubernetes.io/projected/28cf4151-f7be-4992-87f8-e34bf1d0a9c0-kube-api-access-k2rgl\") pod \"telemetry-operator-controller-manager-68d988df55-d7p5d\" (UID: \"28cf4151-f7be-4992-87f8-e34bf1d0a9c0\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.608353 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qhk9\" (UniqueName: \"kubernetes.io/projected/b6565eee-ab9b-4a1a-a5a8-6036df399731-kube-api-access-2qhk9\") pod \"test-operator-controller-manager-6c866cfdcb-8qlqr\" (UID: \"b6565eee-ab9b-4a1a-a5a8-6036df399731\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.620362 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.647124 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.647446 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.649504 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.651987 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.654118 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-scpt5" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.680466 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.680511 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klnjj\" (UniqueName: \"kubernetes.io/projected/c1ae6abf-8410-4816-b2a1-b6a9f0550eb2-kube-api-access-klnjj\") pod \"watcher-operator-controller-manager-9dbdf6486-qlpkh\" (UID: \"c1ae6abf-8410-4816-b2a1-b6a9f0550eb2\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.680538 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrc8\" (UniqueName: \"kubernetes.io/projected/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-kube-api-access-hzrc8\") pod 
\"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.680605 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.680633 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmpv\" (UniqueName: \"kubernetes.io/projected/e4c6355b-ca56-47e9-897e-ed6b641d456a-kube-api-access-ksmpv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fxr6h\" (UID: \"e4c6355b-ca56-47e9-897e-ed6b641d456a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.680659 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.680778 4980 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.680820 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert podName:4223a956-7692-4bcc-8193-02312792b1f9 
nodeName:}" failed. No retries permitted until 2026-01-07 03:47:34.680806131 +0000 UTC m=+901.246500866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert") pod "infra-operator-controller-manager-6d99759cf-c5hk9" (UID: "4223a956-7692-4bcc-8193-02312792b1f9") : secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.707622 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klnjj\" (UniqueName: \"kubernetes.io/projected/c1ae6abf-8410-4816-b2a1-b6a9f0550eb2-kube-api-access-klnjj\") pod \"watcher-operator-controller-manager-9dbdf6486-qlpkh\" (UID: \"c1ae6abf-8410-4816-b2a1-b6a9f0550eb2\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.732489 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2"] Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.758049 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fw2fd" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.765645 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.787958 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.787998 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmpv\" (UniqueName: \"kubernetes.io/projected/e4c6355b-ca56-47e9-897e-ed6b641d456a-kube-api-access-ksmpv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fxr6h\" (UID: \"e4c6355b-ca56-47e9-897e-ed6b641d456a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.788072 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.788098 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzrc8\" (UniqueName: \"kubernetes.io/projected/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-kube-api-access-hzrc8\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.793949 4980 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.798954 4980 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.799027 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:34.299009295 +0000 UTC m=+900.864704030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.804700 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.814671 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmpv\" (UniqueName: \"kubernetes.io/projected/e4c6355b-ca56-47e9-897e-ed6b641d456a-kube-api-access-ksmpv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fxr6h\" (UID: \"e4c6355b-ca56-47e9-897e-ed6b641d456a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.816920 4980 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.817385 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs 
podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:34.317365392 +0000 UTC m=+900.883060127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "metrics-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.824027 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzrc8\" (UniqueName: \"kubernetes.io/projected/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-kube-api-access-hzrc8\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.835269 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.841000 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mpl77" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.849124 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.889197 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.889678 4980 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: E0107 03:47:33.889762 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert podName:b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c nodeName:}" failed. No retries permitted until 2026-01-07 03:47:34.889743419 +0000 UTC m=+901.455438154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" (UID: "b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.910576 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pzqj4" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.912647 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-67tft" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.916210 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" Jan 07 03:47:33 crc kubenswrapper[4980]: I0107 03:47:33.918172 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.177494 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.226008 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.304000 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.309493 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.309612 4980 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.309658 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:35.309643667 +0000 UTC m=+901.875338402 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "webhook-server-cert" not found Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.311068 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc01f5c0_320a_4645_bb96_5bd8b6490e08.slice/crio-cab0975abaaf9127ad0da41e3e26b3ef866f3169ffe5ab0342a31cb17b6b2b3e WatchSource:0}: Error finding container cab0975abaaf9127ad0da41e3e26b3ef866f3169ffe5ab0342a31cb17b6b2b3e: Status 404 returned error can't find the container with id cab0975abaaf9127ad0da41e3e26b3ef866f3169ffe5ab0342a31cb17b6b2b3e Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.314103 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod509933ce_8dca_4f14_bdc4_a5f1608954b3.slice/crio-98b12f31c4ac639c157cd7066a664b84c778bad8e963ba9455a623e83d77ed03 WatchSource:0}: Error finding container 98b12f31c4ac639c157cd7066a664b84c778bad8e963ba9455a623e83d77ed03: Status 404 returned error can't find the container with id 98b12f31c4ac639c157cd7066a664b84c778bad8e963ba9455a623e83d77ed03 Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.322003 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.329578 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.410870 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.411036 4980 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.411115 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:35.411097413 +0000 UTC m=+901.976792138 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "metrics-server-cert" not found Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.564705 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj"] Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.579068 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86933336_6f6c_4327_bcde_a4d1a6caba77.slice/crio-c190974167177bd87306e2ce7097d7220e952fad8749e5759f0c26e49b60e3ff WatchSource:0}: Error finding container c190974167177bd87306e2ce7097d7220e952fad8749e5759f0c26e49b60e3ff: Status 404 returned error can't find the container with id c190974167177bd87306e2ce7097d7220e952fad8749e5759f0c26e49b60e3ff Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.579777 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4"] Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.583340 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96049d0d_7c90_4cab_a18c_5fbd4e9f8373.slice/crio-b103aae38471369de38f31e683e3757b9360ed308a13b1dd71dfc26e8009f49d WatchSource:0}: Error finding container b103aae38471369de38f31e683e3757b9360ed308a13b1dd71dfc26e8009f49d: Status 404 returned error can't find the container with id b103aae38471369de38f31e683e3757b9360ed308a13b1dd71dfc26e8009f49d Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.587100 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881d9164_37f7_48da_b203_a2e5db8e2d23.slice/crio-60b0c1d73d88359ca865ac46b961d068bb45e2b903a2a226b15addaa2995087b WatchSource:0}: Error finding container 60b0c1d73d88359ca865ac46b961d068bb45e2b903a2a226b15addaa2995087b: Status 404 returned error can't find the container with id 60b0c1d73d88359ca865ac46b961d068bb45e2b903a2a226b15addaa2995087b Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.589462 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-ndxck"] Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.589964 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86a562f_bdd6_4463_8edc_6ce72f41af16.slice/crio-43fc0dcc58633e4be058140ca456b353d65f0efd17ebdc79f29305db1c4508c4 WatchSource:0}: Error finding container 43fc0dcc58633e4be058140ca456b353d65f0efd17ebdc79f29305db1c4508c4: Status 404 returned error can't find the container with id 43fc0dcc58633e4be058140ca456b353d65f0efd17ebdc79f29305db1c4508c4 Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.593499 4980 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b63f351_f7ac_44a4_8a65_a6357043af12.slice/crio-4f2f73f90d531d235c1d71ea0ff72f4dbb8a3baf3a851ffb0495e16f866c7d30 WatchSource:0}: Error finding container 4f2f73f90d531d235c1d71ea0ff72f4dbb8a3baf3a851ffb0495e16f866c7d30: Status 404 returned error can't find the container with id 4f2f73f90d531d235c1d71ea0ff72f4dbb8a3baf3a851ffb0495e16f866c7d30 Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.595909 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.600972 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.605506 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.610067 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.612221 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" event={"ID":"81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61","Type":"ContainerStarted","Data":"2e5caf0cc5cf863ee3dae4c096d8ad647f504b6e46106da0719825da4b51908f"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.613846 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" event={"ID":"0b63f351-f7ac-44a4-8a65-a6357043af12","Type":"ContainerStarted","Data":"4f2f73f90d531d235c1d71ea0ff72f4dbb8a3baf3a851ffb0495e16f866c7d30"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.614880 4980 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" event={"ID":"d8d586b5-b752-4122-99af-ba4ce3bbad29","Type":"ContainerStarted","Data":"219e2c73a81d454e7c2a9227b9e23413844af87a28f7a57ee8918e2573092f03"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.618232 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" event={"ID":"881d9164-37f7-48da-b203-a2e5db8e2d23","Type":"ContainerStarted","Data":"60b0c1d73d88359ca865ac46b961d068bb45e2b903a2a226b15addaa2995087b"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.621564 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" event={"ID":"93a3e6e3-bd9b-4883-923c-6d58ae83000d","Type":"ContainerStarted","Data":"ce94522d2cffe82a5f044a1653d9616ed07e9a1e837a1997cba7c1e4ec9c59c6"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.622829 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" event={"ID":"2e8333e2-a664-4f9a-8ddb-07e31ddc3020","Type":"ContainerStarted","Data":"69bf55358b33e526aa2f1058e2f7cb4c76c5f85a87f581c276777bd06231e5d0"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.623682 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" event={"ID":"96049d0d-7c90-4cab-a18c-5fbd4e9f8373","Type":"ContainerStarted","Data":"b103aae38471369de38f31e683e3757b9360ed308a13b1dd71dfc26e8009f49d"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.624671 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" 
event={"ID":"509933ce-8dca-4f14-bdc4-a5f1608954b3","Type":"ContainerStarted","Data":"98b12f31c4ac639c157cd7066a664b84c778bad8e963ba9455a623e83d77ed03"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.625603 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" event={"ID":"86933336-6f6c-4327-bcde-a4d1a6caba77","Type":"ContainerStarted","Data":"c190974167177bd87306e2ce7097d7220e952fad8749e5759f0c26e49b60e3ff"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.626969 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" event={"ID":"82ed1518-12d9-412b-86cc-03fbb1f74bd6","Type":"ContainerStarted","Data":"7055ebd1c3628c74782eb8c243a3d235db8239e5ffe12587b9a56097da551d68"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.627863 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" event={"ID":"920991d2-089f-4864-8237-9684c6282a04","Type":"ContainerStarted","Data":"f46d15a3a8f4de1517e91c98c106fa1cbc3969f4bd5cee4d07d921d77b8cd271"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.629718 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" event={"ID":"cc01f5c0-320a-4645-bb96-5bd8b6490e08","Type":"ContainerStarted","Data":"cab0975abaaf9127ad0da41e3e26b3ef866f3169ffe5ab0342a31cb17b6b2b3e"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.631536 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" event={"ID":"c86a562f-bdd6-4463-8edc-6ce72f41af16","Type":"ContainerStarted","Data":"43fc0dcc58633e4be058140ca456b353d65f0efd17ebdc79f29305db1c4508c4"} Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.710768 4980 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.714394 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.714517 4980 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.714573 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert podName:4223a956-7692-4bcc-8193-02312792b1f9 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:36.714546672 +0000 UTC m=+903.280241407 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert") pod "infra-operator-controller-manager-6d99759cf-c5hk9" (UID: "4223a956-7692-4bcc-8193-02312792b1f9") : secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.725686 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2rgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-68d988df55-d7p5d_openstack-operators(28cf4151-f7be-4992-87f8-e34bf1d0a9c0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.726861 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" podUID="28cf4151-f7be-4992-87f8-e34bf1d0a9c0" Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.727397 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.746729 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh"] Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.747435 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ksmpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fxr6h_openstack-operators(e4c6355b-ca56-47e9-897e-ed6b641d456a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.750041 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" podUID="e4c6355b-ca56-47e9-897e-ed6b641d456a" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.754104 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-klnjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-qlpkh_openstack-operators(c1ae6abf-8410-4816-b2a1-b6a9f0550eb2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.755246 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" podUID="c1ae6abf-8410-4816-b2a1-b6a9f0550eb2" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.758575 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xf2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-d25kc_openstack-operators(58abb189-9361-4eac-8663-55e110e21383): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.759817 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" podUID="58abb189-9361-4eac-8663-55e110e21383" Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.760329 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.766840 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk"] Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.771139 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k7nx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-cc9rk_openstack-operators(edf44de7-04e1-435c-a943-c47873d4e364): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.772469 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" podUID="edf44de7-04e1-435c-a943-c47873d4e364" Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.776607 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4"] Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.783642 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr"] Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.786466 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f2e1ae_fa58_4090_909d_4efdacb15545.slice/crio-719df2dfbd665fc66c22765405147b8be2a87a41a2d27ca3490a2e8d2e57989e WatchSource:0}: Error finding container 719df2dfbd665fc66c22765405147b8be2a87a41a2d27ca3490a2e8d2e57989e: Status 404 returned error can't find the container with id 719df2dfbd665fc66c22765405147b8be2a87a41a2d27ca3490a2e8d2e57989e Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.789359 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwwqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-b4dq4_openstack-operators(e3f2e1ae-fa58-4090-909d-4efdacb15545): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.790450 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" podUID="e3f2e1ae-fa58-4090-909d-4efdacb15545" Jan 07 03:47:34 crc kubenswrapper[4980]: W0107 03:47:34.790882 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6565eee_ab9b_4a1a_a5a8_6036df399731.slice/crio-6f3ba7d38e584a04763f24b6df01b41dd3181318f2eadd664a9214473110a62c WatchSource:0}: Error finding container 6f3ba7d38e584a04763f24b6df01b41dd3181318f2eadd664a9214473110a62c: Status 404 returned error can't find the container with id 6f3ba7d38e584a04763f24b6df01b41dd3181318f2eadd664a9214473110a62c Jan 07 03:47:34 crc kubenswrapper[4980]: I0107 03:47:34.916621 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.916844 4980 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:34 crc kubenswrapper[4980]: E0107 03:47:34.916931 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert podName:b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c nodeName:}" failed. No retries permitted until 2026-01-07 03:47:36.916909396 +0000 UTC m=+903.482604141 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" (UID: "b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.324160 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.324325 4980 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.324405 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:37.324387741 +0000 UTC m=+903.890082476 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "webhook-server-cert" not found Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.425835 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.426002 4980 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.426074 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:37.426055203 +0000 UTC m=+903.991749938 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "metrics-server-cert" not found Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.594324 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q4qsw"] Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.610229 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q4qsw"] Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.610309 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.633124 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfpg\" (UniqueName: \"kubernetes.io/projected/147444c2-604e-45f4-8e61-2e903599d08e-kube-api-access-4tfpg\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.634137 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-catalog-content\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.634219 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-utilities\") pod \"certified-operators-q4qsw\" (UID: 
\"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.638794 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" event={"ID":"e4c6355b-ca56-47e9-897e-ed6b641d456a","Type":"ContainerStarted","Data":"ed74c433d24adbed41cb849be61ac5de492e07571bc3c4bc06661b032fd9ffe2"} Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.642605 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" podUID="e4c6355b-ca56-47e9-897e-ed6b641d456a" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.645963 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" event={"ID":"e3f2e1ae-fa58-4090-909d-4efdacb15545","Type":"ContainerStarted","Data":"719df2dfbd665fc66c22765405147b8be2a87a41a2d27ca3490a2e8d2e57989e"} Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.648186 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" event={"ID":"edf44de7-04e1-435c-a943-c47873d4e364","Type":"ContainerStarted","Data":"d6fee7e8ab17c41c1ebc80d155596022f25d8761351b15125eddd5deffa96582"} Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.650139 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" podUID="e3f2e1ae-fa58-4090-909d-4efdacb15545" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.652454 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" event={"ID":"28cf4151-f7be-4992-87f8-e34bf1d0a9c0","Type":"ContainerStarted","Data":"104fe3029ee27082986431f5c7322aa182a2177d63b769b701ccd95a106ba0a6"} Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.652677 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" podUID="edf44de7-04e1-435c-a943-c47873d4e364" Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.658762 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" podUID="28cf4151-f7be-4992-87f8-e34bf1d0a9c0" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.667019 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" event={"ID":"c1ae6abf-8410-4816-b2a1-b6a9f0550eb2","Type":"ContainerStarted","Data":"68ca22941d83e433b24139d22851177b8728218901f796c087b666447be646e2"} Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.668791 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" podUID="c1ae6abf-8410-4816-b2a1-b6a9f0550eb2" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.675104 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" event={"ID":"b6565eee-ab9b-4a1a-a5a8-6036df399731","Type":"ContainerStarted","Data":"6f3ba7d38e584a04763f24b6df01b41dd3181318f2eadd664a9214473110a62c"} Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.681752 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" event={"ID":"58abb189-9361-4eac-8663-55e110e21383","Type":"ContainerStarted","Data":"482b185b2e6eca16ba2a18c7e148c4d8bd1308b4721821e9e7f5160bde77a8b5"} Jan 07 03:47:35 crc kubenswrapper[4980]: E0107 03:47:35.685597 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" podUID="58abb189-9361-4eac-8663-55e110e21383" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.738156 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfpg\" (UniqueName: \"kubernetes.io/projected/147444c2-604e-45f4-8e61-2e903599d08e-kube-api-access-4tfpg\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.738407 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-catalog-content\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.738446 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-utilities\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.741180 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-catalog-content\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.741892 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-utilities\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.828147 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfpg\" (UniqueName: \"kubernetes.io/projected/147444c2-604e-45f4-8e61-2e903599d08e-kube-api-access-4tfpg\") pod \"certified-operators-q4qsw\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:35 crc kubenswrapper[4980]: I0107 03:47:35.945036 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:47:36 crc kubenswrapper[4980]: I0107 03:47:36.544880 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:47:36 crc kubenswrapper[4980]: I0107 03:47:36.545198 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:47:36 crc kubenswrapper[4980]: I0107 03:47:36.547066 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q4qsw"] Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.699921 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" podUID="e3f2e1ae-fa58-4090-909d-4efdacb15545" Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.701720 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" podUID="edf44de7-04e1-435c-a943-c47873d4e364" Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 
03:47:36.709848 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" podUID="e4c6355b-ca56-47e9-897e-ed6b641d456a" Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.709920 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" podUID="c1ae6abf-8410-4816-b2a1-b6a9f0550eb2" Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.710179 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" podUID="58abb189-9361-4eac-8663-55e110e21383" Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.710388 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" podUID="28cf4151-f7be-4992-87f8-e34bf1d0a9c0" Jan 07 03:47:36 crc kubenswrapper[4980]: I0107 03:47:36.767629 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.768364 4980 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.768420 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert podName:4223a956-7692-4bcc-8193-02312792b1f9 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:40.768405932 +0000 UTC m=+907.334100667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert") pod "infra-operator-controller-manager-6d99759cf-c5hk9" (UID: "4223a956-7692-4bcc-8193-02312792b1f9") : secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:36 crc kubenswrapper[4980]: I0107 03:47:36.969855 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.970004 4980 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:36 crc kubenswrapper[4980]: E0107 03:47:36.970081 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert podName:b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c nodeName:}" failed. No retries permitted until 2026-01-07 03:47:40.970062864 +0000 UTC m=+907.535757599 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" (UID: "b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:37 crc kubenswrapper[4980]: I0107 03:47:37.375475 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:37 crc kubenswrapper[4980]: E0107 03:47:37.375692 4980 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 07 03:47:37 crc kubenswrapper[4980]: E0107 03:47:37.375800 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:41.375781264 +0000 UTC m=+907.941475999 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "webhook-server-cert" not found Jan 07 03:47:37 crc kubenswrapper[4980]: I0107 03:47:37.476252 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:37 crc kubenswrapper[4980]: E0107 03:47:37.476436 4980 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 07 03:47:37 crc kubenswrapper[4980]: E0107 03:47:37.476504 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:41.476488287 +0000 UTC m=+908.042183022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "metrics-server-cert" not found Jan 07 03:47:38 crc kubenswrapper[4980]: I0107 03:47:38.716866 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4qsw" event={"ID":"147444c2-604e-45f4-8e61-2e903599d08e","Type":"ContainerStarted","Data":"30e76656dac8662118ecc1bac7c666c06e488b4c51b1276c9ab7a2e6a405b3c3"} Jan 07 03:47:40 crc kubenswrapper[4980]: I0107 03:47:40.847689 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:40 crc kubenswrapper[4980]: E0107 03:47:40.847893 4980 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:40 crc kubenswrapper[4980]: E0107 03:47:40.848352 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert podName:4223a956-7692-4bcc-8193-02312792b1f9 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:48.848310552 +0000 UTC m=+915.414005317 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert") pod "infra-operator-controller-manager-6d99759cf-c5hk9" (UID: "4223a956-7692-4bcc-8193-02312792b1f9") : secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:41 crc kubenswrapper[4980]: I0107 03:47:41.051140 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:41 crc kubenswrapper[4980]: E0107 03:47:41.051430 4980 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:41 crc kubenswrapper[4980]: E0107 03:47:41.051510 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert podName:b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c nodeName:}" failed. No retries permitted until 2026-01-07 03:47:49.051490262 +0000 UTC m=+915.617184997 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" (UID: "b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:41 crc kubenswrapper[4980]: I0107 03:47:41.461293 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:41 crc kubenswrapper[4980]: E0107 03:47:41.461776 4980 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 07 03:47:41 crc kubenswrapper[4980]: E0107 03:47:41.461876 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:49.461858146 +0000 UTC m=+916.027552881 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "webhook-server-cert" not found Jan 07 03:47:41 crc kubenswrapper[4980]: I0107 03:47:41.563399 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:41 crc kubenswrapper[4980]: E0107 03:47:41.563630 4980 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 07 03:47:41 crc kubenswrapper[4980]: E0107 03:47:41.563685 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:47:49.563672212 +0000 UTC m=+916.129366947 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "metrics-server-cert" not found Jan 07 03:47:48 crc kubenswrapper[4980]: E0107 03:47:48.724973 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d" Jan 07 03:47:48 crc kubenswrapper[4980]: E0107 03:47:48.725796 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkcbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-7b549fc966-gnrv9_openstack-operators(920991d2-089f-4864-8237-9684c6282a04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:47:48 crc kubenswrapper[4980]: E0107 03:47:48.726984 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" podUID="920991d2-089f-4864-8237-9684c6282a04" Jan 07 03:47:48 crc kubenswrapper[4980]: E0107 03:47:48.816104 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d\\\"\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" podUID="920991d2-089f-4864-8237-9684c6282a04" Jan 07 03:47:48 crc kubenswrapper[4980]: I0107 03:47:48.887727 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:47:48 crc kubenswrapper[4980]: E0107 03:47:48.887919 4980 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:48 crc kubenswrapper[4980]: E0107 03:47:48.888061 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert podName:4223a956-7692-4bcc-8193-02312792b1f9 nodeName:}" failed. No retries permitted until 2026-01-07 03:48:04.88799047 +0000 UTC m=+931.453685365 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert") pod "infra-operator-controller-manager-6d99759cf-c5hk9" (UID: "4223a956-7692-4bcc-8193-02312792b1f9") : secret "infra-operator-webhook-server-cert" not found Jan 07 03:47:49 crc kubenswrapper[4980]: I0107 03:47:49.090517 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:47:49 crc kubenswrapper[4980]: E0107 03:47:49.090703 4980 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:49 crc kubenswrapper[4980]: E0107 03:47:49.090774 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert podName:b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c nodeName:}" failed. No retries permitted until 2026-01-07 03:48:05.090755534 +0000 UTC m=+931.656450269 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" (UID: "b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 07 03:47:49 crc kubenswrapper[4980]: I0107 03:47:49.496174 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:49 crc kubenswrapper[4980]: E0107 03:47:49.496345 4980 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 07 03:47:49 crc kubenswrapper[4980]: E0107 03:47:49.496421 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:48:05.496403155 +0000 UTC m=+932.062097890 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "webhook-server-cert" not found Jan 07 03:47:49 crc kubenswrapper[4980]: I0107 03:47:49.597880 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:47:49 crc kubenswrapper[4980]: E0107 03:47:49.598111 4980 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 07 03:47:49 crc kubenswrapper[4980]: E0107 03:47:49.598224 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs podName:4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3 nodeName:}" failed. No retries permitted until 2026-01-07 03:48:05.598192569 +0000 UTC m=+932.163887334 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs") pod "openstack-operator-controller-manager-7bbf496545-vdwhj" (UID: "4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3") : secret "metrics-server-cert" not found Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.238475 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kdwhd"] Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.240801 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.256648 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdwhd"] Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.346840 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-utilities\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.346908 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-catalog-content\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.346976 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gchn\" (UniqueName: \"kubernetes.io/projected/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-kube-api-access-6gchn\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.449290 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-utilities\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.449368 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-catalog-content\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.449436 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gchn\" (UniqueName: \"kubernetes.io/projected/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-kube-api-access-6gchn\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.450090 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-utilities\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.450134 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-catalog-content\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.475703 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gchn\" (UniqueName: \"kubernetes.io/projected/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-kube-api-access-6gchn\") pod \"community-operators-kdwhd\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:52 crc kubenswrapper[4980]: I0107 03:47:52.584842 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:47:55 crc kubenswrapper[4980]: E0107 03:47:55.489613 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848" Jan 07 03:47:55 crc kubenswrapper[4980]: E0107 03:47:55.490406 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-882dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-f99f54bc8-s5s7g_openstack-operators(0b63f351-f7ac-44a4-8a65-a6357043af12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:47:55 crc kubenswrapper[4980]: E0107 03:47:55.491594 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" podUID="0b63f351-f7ac-44a4-8a65-a6357043af12" Jan 07 03:47:55 crc kubenswrapper[4980]: E0107 03:47:55.867641 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" podUID="0b63f351-f7ac-44a4-8a65-a6357043af12" Jan 07 03:47:57 crc kubenswrapper[4980]: E0107 03:47:57.332307 4980 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04" Jan 07 03:47:57 crc kubenswrapper[4980]: E0107 03:47:57.333245 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xf98p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-658dd65b86-vghwj_openstack-operators(93a3e6e3-bd9b-4883-923c-6d58ae83000d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:47:57 crc kubenswrapper[4980]: E0107 03:47:57.334619 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" podUID="93a3e6e3-bd9b-4883-923c-6d58ae83000d" Jan 07 03:47:57 crc kubenswrapper[4980]: E0107 03:47:57.880134 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04\\\"\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" podUID="93a3e6e3-bd9b-4883-923c-6d58ae83000d" Jan 07 03:47:58 crc kubenswrapper[4980]: E0107 03:47:58.214168 4980 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c" Jan 07 03:47:58 crc kubenswrapper[4980]: E0107 03:47:58.214359 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vdxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-5jzx4_openstack-operators(86933336-6f6c-4327-bcde-a4d1a6caba77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:47:58 crc kubenswrapper[4980]: E0107 03:47:58.215719 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" podUID="86933336-6f6c-4327-bcde-a4d1a6caba77" Jan 07 03:47:58 crc kubenswrapper[4980]: E0107 03:47:58.916071 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" podUID="86933336-6f6c-4327-bcde-a4d1a6caba77" Jan 07 03:47:58 crc kubenswrapper[4980]: E0107 03:47:58.929781 4980 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3" Jan 07 03:47:58 crc kubenswrapper[4980]: E0107 03:47:58.930061 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhb95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-7f5ddd8d7b-r9bm7_openstack-operators(509933ce-8dca-4f14-bdc4-a5f1608954b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:47:58 crc kubenswrapper[4980]: E0107 03:47:58.931651 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" podUID="509933ce-8dca-4f14-bdc4-a5f1608954b3" Jan 07 03:47:59 crc kubenswrapper[4980]: E0107 03:47:59.921120 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" podUID="509933ce-8dca-4f14-bdc4-a5f1608954b3" Jan 07 03:48:00 crc kubenswrapper[4980]: E0107 03:48:00.918586 4980 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Jan 07 03:48:00 crc kubenswrapper[4980]: E0107 03:48:00.919148 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmr7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-fdnhb_openstack-operators(c86a562f-bdd6-4463-8edc-6ce72f41af16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:48:00 crc kubenswrapper[4980]: E0107 03:48:00.920405 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" podUID="c86a562f-bdd6-4463-8edc-6ce72f41af16" Jan 07 03:48:01 crc kubenswrapper[4980]: E0107 03:48:01.484008 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Jan 07 03:48:01 crc kubenswrapper[4980]: E0107 03:48:01.484323 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxzpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-fd8dn_openstack-operators(881d9164-37f7-48da-b203-a2e5db8e2d23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:48:01 crc kubenswrapper[4980]: E0107 03:48:01.485744 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" podUID="881d9164-37f7-48da-b203-a2e5db8e2d23" Jan 07 03:48:01 crc kubenswrapper[4980]: E0107 03:48:01.935579 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" podUID="881d9164-37f7-48da-b203-a2e5db8e2d23" Jan 07 03:48:01 crc kubenswrapper[4980]: E0107 03:48:01.935621 4980 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" podUID="c86a562f-bdd6-4463-8edc-6ce72f41af16" Jan 07 03:48:02 crc kubenswrapper[4980]: E0107 03:48:02.095807 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 07 03:48:02 crc kubenswrapper[4980]: E0107 03:48:02.096028 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwfll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-ndxck_openstack-operators(d8d586b5-b752-4122-99af-ba4ce3bbad29): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:48:02 crc kubenswrapper[4980]: E0107 03:48:02.098091 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" podUID="d8d586b5-b752-4122-99af-ba4ce3bbad29" Jan 07 03:48:02 crc kubenswrapper[4980]: E0107 03:48:02.940578 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" podUID="d8d586b5-b752-4122-99af-ba4ce3bbad29" Jan 07 03:48:04 crc kubenswrapper[4980]: I0107 03:48:04.957267 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:48:04 crc kubenswrapper[4980]: I0107 03:48:04.970514 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4223a956-7692-4bcc-8193-02312792b1f9-cert\") pod \"infra-operator-controller-manager-6d99759cf-c5hk9\" (UID: \"4223a956-7692-4bcc-8193-02312792b1f9\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.090004 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s6jd7" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.098216 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.161610 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.166752 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72mjfc\" (UID: \"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.398820 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hxkhl" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.405707 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:48:05 crc kubenswrapper[4980]: E0107 03:48:05.429168 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 07 03:48:05 crc kubenswrapper[4980]: E0107 03:48:05.429429 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvdkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-55fcj_openstack-operators(96049d0d-7c90-4cab-a18c-5fbd4e9f8373): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:48:05 crc kubenswrapper[4980]: E0107 03:48:05.430645 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" podUID="96049d0d-7c90-4cab-a18c-5fbd4e9f8373" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.569792 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.575166 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-webhook-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.671184 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.676682 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3-metrics-certs\") pod \"openstack-operator-controller-manager-7bbf496545-vdwhj\" (UID: \"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3\") " pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:48:05 crc kubenswrapper[4980]: E0107 03:48:05.961177 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" podUID="96049d0d-7c90-4cab-a18c-5fbd4e9f8373" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.979715 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h6l9p" Jan 07 03:48:05 crc kubenswrapper[4980]: I0107 03:48:05.985312 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.543220 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.543286 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.543333 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.544403 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2aeadb84272b7976dfcd584a184be5b65ae16f36aa28ca68277c09134c73d7e7"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.544463 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://2aeadb84272b7976dfcd584a184be5b65ae16f36aa28ca68277c09134c73d7e7" gracePeriod=600 Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.970510 4980 generic.go:334] "Generic (PLEG): container 
finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="2aeadb84272b7976dfcd584a184be5b65ae16f36aa28ca68277c09134c73d7e7" exitCode=0 Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.970623 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"2aeadb84272b7976dfcd584a184be5b65ae16f36aa28ca68277c09134c73d7e7"} Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.971125 4980 scope.go:117] "RemoveContainer" containerID="22f87e8413daf7843826baa261082b343285cfe845501a26b14ff6b1f2751cb0" Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.972989 4980 generic.go:334] "Generic (PLEG): container finished" podID="147444c2-604e-45f4-8e61-2e903599d08e" containerID="2b0a88348695dd4026c1585d18237e8403285ec0139bf7ba5c1753483b7cc358" exitCode=0 Jan 07 03:48:06 crc kubenswrapper[4980]: I0107 03:48:06.973052 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4qsw" event={"ID":"147444c2-604e-45f4-8e61-2e903599d08e","Type":"ContainerDied","Data":"2b0a88348695dd4026c1585d18237e8403285ec0139bf7ba5c1753483b7cc358"} Jan 07 03:48:08 crc kubenswrapper[4980]: I0107 03:48:08.051058 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdwhd"] Jan 07 03:48:08 crc kubenswrapper[4980]: I0107 03:48:08.093935 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9"] Jan 07 03:48:08 crc kubenswrapper[4980]: I0107 03:48:08.152911 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj"] Jan 07 03:48:08 crc kubenswrapper[4980]: W0107 03:48:08.155235 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4223a956_7692_4bcc_8193_02312792b1f9.slice/crio-216eb9e1c59f4d4744f388f236670e4debc03309634da80541d6b65572ede692 WatchSource:0}: Error finding container 216eb9e1c59f4d4744f388f236670e4debc03309634da80541d6b65572ede692: Status 404 returned error can't find the container with id 216eb9e1c59f4d4744f388f236670e4debc03309634da80541d6b65572ede692 Jan 07 03:48:08 crc kubenswrapper[4980]: W0107 03:48:08.176855 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cfd0d7f_4c37_4cd5_a97a_ceff58bd52a3.slice/crio-a931906e2cfc733b4981f9fc1e04ece17807d28969a05541581dd55d5d7db492 WatchSource:0}: Error finding container a931906e2cfc733b4981f9fc1e04ece17807d28969a05541581dd55d5d7db492: Status 404 returned error can't find the container with id a931906e2cfc733b4981f9fc1e04ece17807d28969a05541581dd55d5d7db492 Jan 07 03:48:08 crc kubenswrapper[4980]: I0107 03:48:08.249810 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc"] Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.002285 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" event={"ID":"e4c6355b-ca56-47e9-897e-ed6b641d456a","Type":"ContainerStarted","Data":"7025862ee710754dbcf206f3704150239d79652310d16d7aa58d082b0443d849"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.009362 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" event={"ID":"920991d2-089f-4864-8237-9684c6282a04","Type":"ContainerStarted","Data":"06ccb953bc0a4ebed536bb32c3b973d9d3f773863deb32ca260f27ccd3668b26"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.009665 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.017823 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" event={"ID":"e3f2e1ae-fa58-4090-909d-4efdacb15545","Type":"ContainerStarted","Data":"9e3289d92e0a17210dbc173a62eeadd83381e9abe3fd13909ba65d22368c1c54"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.018100 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.020451 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" event={"ID":"edf44de7-04e1-435c-a943-c47873d4e364","Type":"ContainerStarted","Data":"afbb0e4d5f234c60c2c3bd734aa67e1376029afd09ad9543fc62cfb8179c1fde"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.020734 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.023414 4980 generic.go:334] "Generic (PLEG): container finished" podID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerID="8012149d3112c9b646d0d91886a12393233b9a85ba8da8443398ea286b8ed407" exitCode=0 Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.023531 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdwhd" event={"ID":"54bfae0d-a9c8-4dd6-802e-e5de80d8299b","Type":"ContainerDied","Data":"8012149d3112c9b646d0d91886a12393233b9a85ba8da8443398ea286b8ed407"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.023629 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdwhd" 
event={"ID":"54bfae0d-a9c8-4dd6-802e-e5de80d8299b","Type":"ContainerStarted","Data":"54c27d37b35fabf49d9d92f1e2b358b18c04651e954d3298aadf4d83d02a5dde"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.028686 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" event={"ID":"28cf4151-f7be-4992-87f8-e34bf1d0a9c0","Type":"ContainerStarted","Data":"edef9bdde09048b696f791bbeb3315a32aa365ec62d404133e108fe0ac7ab36e"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.028959 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.031406 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fxr6h" podStartSLOduration=3.017080436 podStartE2EDuration="36.031389884s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.747320765 +0000 UTC m=+901.313015490" lastFinishedPulling="2026-01-07 03:48:07.761630193 +0000 UTC m=+934.327324938" observedRunningTime="2026-01-07 03:48:09.030714433 +0000 UTC m=+935.596409158" watchObservedRunningTime="2026-01-07 03:48:09.031389884 +0000 UTC m=+935.597084619" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.033216 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"40fed48537fb4dd350c71735c8360a409552809cda596d45f1ede2d146d2801a"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.038755 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" 
event={"ID":"c1ae6abf-8410-4816-b2a1-b6a9f0550eb2","Type":"ContainerStarted","Data":"9ec4a92c01ec2115a101f507e194513bbb19cd8c6c1cc72dcdee40f610b4f426"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.038998 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.042635 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" event={"ID":"4223a956-7692-4bcc-8193-02312792b1f9","Type":"ContainerStarted","Data":"216eb9e1c59f4d4744f388f236670e4debc03309634da80541d6b65572ede692"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.051009 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" podStartSLOduration=3.055997099 podStartE2EDuration="36.050978902s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.78923891 +0000 UTC m=+901.354933645" lastFinishedPulling="2026-01-07 03:48:07.784220713 +0000 UTC m=+934.349915448" observedRunningTime="2026-01-07 03:48:09.048493375 +0000 UTC m=+935.614188110" watchObservedRunningTime="2026-01-07 03:48:09.050978902 +0000 UTC m=+935.616673637" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.060099 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" event={"ID":"b6565eee-ab9b-4a1a-a5a8-6036df399731","Type":"ContainerStarted","Data":"44b4abbacb3d3b03e8496239be5438b4b6df4894c1c3ccd6303ea44799bce8ce"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.060294 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.066440 4980 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" event={"ID":"58abb189-9361-4eac-8663-55e110e21383","Type":"ContainerStarted","Data":"ae55bedfacde80e0b454319502a7621b5380bc7f67d7f7f4b125633c6d90ce9a"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.069760 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.093022 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" event={"ID":"82ed1518-12d9-412b-86cc-03fbb1f74bd6","Type":"ContainerStarted","Data":"ece49e6e99adf6500b166d3cef74a798b519c1f1cbe852b75a1e8accc1e2cab6"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.093829 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.112864 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" event={"ID":"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3","Type":"ContainerStarted","Data":"67d5aafab3b595f1b6dc8817d8479b43034ec66102a649037a5b55913676ba5f"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.112921 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" event={"ID":"4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3","Type":"ContainerStarted","Data":"a931906e2cfc733b4981f9fc1e04ece17807d28969a05541581dd55d5d7db492"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.113632 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 
03:48:09.122461 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" podStartSLOduration=4.109629002 podStartE2EDuration="37.122437726s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.771030298 +0000 UTC m=+901.336725043" lastFinishedPulling="2026-01-07 03:48:07.783838992 +0000 UTC m=+934.349533767" observedRunningTime="2026-01-07 03:48:09.112995964 +0000 UTC m=+935.678690699" watchObservedRunningTime="2026-01-07 03:48:09.122437726 +0000 UTC m=+935.688132461" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.126771 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" event={"ID":"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c","Type":"ContainerStarted","Data":"ba1e719da06c2c5e9e213e917ecfcd05e31061b0fd94443bbd7aa74165fe838b"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.143787 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" event={"ID":"2e8333e2-a664-4f9a-8ddb-07e31ddc3020","Type":"ContainerStarted","Data":"154b7676e414f5eead60ebfcf289356e7309bc41c3dadbb8bb240f81fe1e06a6"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.144736 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.147952 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" podStartSLOduration=3.5128756660000002 podStartE2EDuration="37.147927506s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.189676689 +0000 UTC m=+900.755371424" lastFinishedPulling="2026-01-07 
03:48:07.824728519 +0000 UTC m=+934.390423264" observedRunningTime="2026-01-07 03:48:09.140130644 +0000 UTC m=+935.705825379" watchObservedRunningTime="2026-01-07 03:48:09.147927506 +0000 UTC m=+935.713622241" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.155388 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4qsw" event={"ID":"147444c2-604e-45f4-8e61-2e903599d08e","Type":"ContainerStarted","Data":"70a6a1f1f18870958ca1903c8fce05f0be8bbc563abf174d4ff6e3bcdcbbe364"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.158572 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" event={"ID":"cc01f5c0-320a-4645-bb96-5bd8b6490e08","Type":"ContainerStarted","Data":"65b3f10ae4c993ebf39992ad494adb72a1484656bc12b9f65525536a20db60c1"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.159038 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.162466 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" event={"ID":"81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61","Type":"ContainerStarted","Data":"6e9fe23fbd1f8b6c65a4bb4e815d9a87cf8e295f0540c8a9d60dfb53c2adbb04"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.163605 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.165106 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" 
event={"ID":"0b63f351-f7ac-44a4-8a65-a6357043af12","Type":"ContainerStarted","Data":"8662c638dff037dd052d510a63842504c613364e0bd220a9258f811eda717a5e"} Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.165492 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.206283 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" podStartSLOduration=3.1792491529999998 podStartE2EDuration="36.206257503s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.725535462 +0000 UTC m=+901.291230197" lastFinishedPulling="2026-01-07 03:48:07.752543812 +0000 UTC m=+934.318238547" observedRunningTime="2026-01-07 03:48:09.176702778 +0000 UTC m=+935.742397513" watchObservedRunningTime="2026-01-07 03:48:09.206257503 +0000 UTC m=+935.771952238" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.211149 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" podStartSLOduration=3.212563063 podStartE2EDuration="36.211140525s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.7539637 +0000 UTC m=+901.319658445" lastFinishedPulling="2026-01-07 03:48:07.752541142 +0000 UTC m=+934.318235907" observedRunningTime="2026-01-07 03:48:09.199977429 +0000 UTC m=+935.765672164" watchObservedRunningTime="2026-01-07 03:48:09.211140525 +0000 UTC m=+935.776835260" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.249822 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" podStartSLOduration=36.249808394 podStartE2EDuration="36.249808394s" podCreationTimestamp="2026-01-07 
03:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:48:09.244360645 +0000 UTC m=+935.810055380" watchObservedRunningTime="2026-01-07 03:48:09.249808394 +0000 UTC m=+935.815503129" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.284328 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" podStartSLOduration=3.282544812 podStartE2EDuration="36.284307253s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.758396777 +0000 UTC m=+901.324091512" lastFinishedPulling="2026-01-07 03:48:07.760159198 +0000 UTC m=+934.325853953" observedRunningTime="2026-01-07 03:48:09.282251679 +0000 UTC m=+935.847946414" watchObservedRunningTime="2026-01-07 03:48:09.284307253 +0000 UTC m=+935.850001988" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.324105 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" podStartSLOduration=4.892753649 podStartE2EDuration="37.324087415s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.318499841 +0000 UTC m=+900.884194576" lastFinishedPulling="2026-01-07 03:48:06.749833607 +0000 UTC m=+933.315528342" observedRunningTime="2026-01-07 03:48:09.318862443 +0000 UTC m=+935.884557188" watchObservedRunningTime="2026-01-07 03:48:09.324087415 +0000 UTC m=+935.889782150" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.351125 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" podStartSLOduration=9.136768203 podStartE2EDuration="37.351107743s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:33.880647098 +0000 
UTC m=+900.446341833" lastFinishedPulling="2026-01-07 03:48:02.094986598 +0000 UTC m=+928.660681373" observedRunningTime="2026-01-07 03:48:09.346358325 +0000 UTC m=+935.912053060" watchObservedRunningTime="2026-01-07 03:48:09.351107743 +0000 UTC m=+935.916802478" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.368021 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" podStartSLOduration=4.161221996 podStartE2EDuration="37.368000977s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.595914445 +0000 UTC m=+901.161609180" lastFinishedPulling="2026-01-07 03:48:07.802693406 +0000 UTC m=+934.368388161" observedRunningTime="2026-01-07 03:48:09.366809359 +0000 UTC m=+935.932504094" watchObservedRunningTime="2026-01-07 03:48:09.368000977 +0000 UTC m=+935.933695702" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.405213 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" podStartSLOduration=5.239940147 podStartE2EDuration="37.405198339s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.581335004 +0000 UTC m=+901.147029739" lastFinishedPulling="2026-01-07 03:48:06.746593196 +0000 UTC m=+933.312287931" observedRunningTime="2026-01-07 03:48:09.403734024 +0000 UTC m=+935.969428749" watchObservedRunningTime="2026-01-07 03:48:09.405198339 +0000 UTC m=+935.970893074" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.427116 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" podStartSLOduration=4.923834149 podStartE2EDuration="37.427096178s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.246608859 +0000 UTC m=+900.812303594" 
lastFinishedPulling="2026-01-07 03:48:06.749870888 +0000 UTC m=+933.315565623" observedRunningTime="2026-01-07 03:48:09.426975084 +0000 UTC m=+935.992669819" watchObservedRunningTime="2026-01-07 03:48:09.427096178 +0000 UTC m=+935.992790913" Jan 07 03:48:09 crc kubenswrapper[4980]: I0107 03:48:09.474143 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" podStartSLOduration=3.766712184 podStartE2EDuration="36.474123945s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.792623665 +0000 UTC m=+901.358318400" lastFinishedPulling="2026-01-07 03:48:07.500035436 +0000 UTC m=+934.065730161" observedRunningTime="2026-01-07 03:48:09.471878446 +0000 UTC m=+936.037573181" watchObservedRunningTime="2026-01-07 03:48:09.474123945 +0000 UTC m=+936.039818680" Jan 07 03:48:10 crc kubenswrapper[4980]: I0107 03:48:10.183826 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" event={"ID":"93a3e6e3-bd9b-4883-923c-6d58ae83000d","Type":"ContainerStarted","Data":"d563f4ac027ef37e1fcc96f78fa9457e8995c8bf66e6e952bc34f3d1cb0d703a"} Jan 07 03:48:10 crc kubenswrapper[4980]: I0107 03:48:10.184438 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" Jan 07 03:48:10 crc kubenswrapper[4980]: I0107 03:48:10.191322 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdwhd" event={"ID":"54bfae0d-a9c8-4dd6-802e-e5de80d8299b","Type":"ContainerStarted","Data":"19cec7a5f43924b276c852367b7fec40c5bb3d7890b8c272d1d76291eca676a1"} Jan 07 03:48:10 crc kubenswrapper[4980]: I0107 03:48:10.193774 4980 generic.go:334] "Generic (PLEG): container finished" podID="147444c2-604e-45f4-8e61-2e903599d08e" 
containerID="70a6a1f1f18870958ca1903c8fce05f0be8bbc563abf174d4ff6e3bcdcbbe364" exitCode=0 Jan 07 03:48:10 crc kubenswrapper[4980]: I0107 03:48:10.193842 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4qsw" event={"ID":"147444c2-604e-45f4-8e61-2e903599d08e","Type":"ContainerDied","Data":"70a6a1f1f18870958ca1903c8fce05f0be8bbc563abf174d4ff6e3bcdcbbe364"} Jan 07 03:48:10 crc kubenswrapper[4980]: I0107 03:48:10.204327 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" podStartSLOduration=3.239785475 podStartE2EDuration="38.204310385s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.303170847 +0000 UTC m=+900.868865582" lastFinishedPulling="2026-01-07 03:48:09.267695757 +0000 UTC m=+935.833390492" observedRunningTime="2026-01-07 03:48:10.203385186 +0000 UTC m=+936.769079921" watchObservedRunningTime="2026-01-07 03:48:10.204310385 +0000 UTC m=+936.770005120" Jan 07 03:48:11 crc kubenswrapper[4980]: I0107 03:48:11.203519 4980 generic.go:334] "Generic (PLEG): container finished" podID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerID="19cec7a5f43924b276c852367b7fec40c5bb3d7890b8c272d1d76291eca676a1" exitCode=0 Jan 07 03:48:11 crc kubenswrapper[4980]: I0107 03:48:11.203591 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdwhd" event={"ID":"54bfae0d-a9c8-4dd6-802e-e5de80d8299b","Type":"ContainerDied","Data":"19cec7a5f43924b276c852367b7fec40c5bb3d7890b8c272d1d76291eca676a1"} Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.212612 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" 
event={"ID":"b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c","Type":"ContainerStarted","Data":"c9382abd9059f40bcfa7e226d2525c1e02d8f54f0c847b73498dcb85a63654b2"} Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.212994 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.214954 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4qsw" event={"ID":"147444c2-604e-45f4-8e61-2e903599d08e","Type":"ContainerStarted","Data":"d9a1400d28ac0f6d08a9155fb80ae7ba6031f4aebbe99e30df55fcbeb749b32e"} Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.218040 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" event={"ID":"4223a956-7692-4bcc-8193-02312792b1f9","Type":"ContainerStarted","Data":"bf88484d9fc2a855fb277584dff603ef6645a2b88215f4e0cb47791a0bf7ec15"} Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.219031 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.235306 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" podStartSLOduration=35.730823869 podStartE2EDuration="39.235286046s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:48:08.291939588 +0000 UTC m=+934.857634323" lastFinishedPulling="2026-01-07 03:48:11.796401765 +0000 UTC m=+938.362096500" observedRunningTime="2026-01-07 03:48:12.232550151 +0000 UTC m=+938.798244886" watchObservedRunningTime="2026-01-07 03:48:12.235286046 +0000 UTC m=+938.800980791" Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.267519 4980 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q4qsw" podStartSLOduration=32.969070023 podStartE2EDuration="37.267499215s" podCreationTimestamp="2026-01-07 03:47:35 +0000 UTC" firstStartedPulling="2026-01-07 03:48:07.500110509 +0000 UTC m=+934.065805244" lastFinishedPulling="2026-01-07 03:48:11.798539701 +0000 UTC m=+938.364234436" observedRunningTime="2026-01-07 03:48:12.265446812 +0000 UTC m=+938.831141577" watchObservedRunningTime="2026-01-07 03:48:12.267499215 +0000 UTC m=+938.833193950" Jan 07 03:48:12 crc kubenswrapper[4980]: I0107 03:48:12.316623 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" podStartSLOduration=36.677906038 podStartE2EDuration="40.316602476s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:48:08.159293637 +0000 UTC m=+934.724988372" lastFinishedPulling="2026-01-07 03:48:11.797990075 +0000 UTC m=+938.363684810" observedRunningTime="2026-01-07 03:48:12.29603556 +0000 UTC m=+938.861730315" watchObservedRunningTime="2026-01-07 03:48:12.316602476 +0000 UTC m=+938.882297211" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.131339 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mqsl2" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.136045 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qj7hn" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.173622 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-gnrv9" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.231335 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-kdwhd" event={"ID":"54bfae0d-a9c8-4dd6-802e-e5de80d8299b","Type":"ContainerStarted","Data":"afa88e7aeb687cd04d0213027d9b5b05b61865190e5b848c7b28b2e97f0f118f"} Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.257583 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kdwhd" podStartSLOduration=18.063060816 podStartE2EDuration="21.257550057s" podCreationTimestamp="2026-01-07 03:47:52 +0000 UTC" firstStartedPulling="2026-01-07 03:48:09.02575587 +0000 UTC m=+935.591450605" lastFinishedPulling="2026-01-07 03:48:12.220245091 +0000 UTC m=+938.785939846" observedRunningTime="2026-01-07 03:48:13.251381957 +0000 UTC m=+939.817076692" watchObservedRunningTime="2026-01-07 03:48:13.257550057 +0000 UTC m=+939.823244782" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.372189 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-s5s7g" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.435896 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-8rjjc" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.532903 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cc9rk" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.557830 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-kpmrn" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.651977 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-d25kc" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.779529 4980 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-b4dq4" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.861914 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d7p5d" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.933897 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-qlpkh" Jan 07 03:48:13 crc kubenswrapper[4980]: I0107 03:48:13.946882 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-8qlqr" Jan 07 03:48:14 crc kubenswrapper[4980]: I0107 03:48:14.236625 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" event={"ID":"86933336-6f6c-4327-bcde-a4d1a6caba77","Type":"ContainerStarted","Data":"64aab1bf6c65b0d6d4367c23df351a124ac3d1a5369b2ed2ed173964767d123b"} Jan 07 03:48:14 crc kubenswrapper[4980]: I0107 03:48:14.236863 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" Jan 07 03:48:14 crc kubenswrapper[4980]: I0107 03:48:14.238624 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" event={"ID":"509933ce-8dca-4f14-bdc4-a5f1608954b3","Type":"ContainerStarted","Data":"fafa3aed90ebcb10894ef65ea50802d92f54584372a68cf259c747e8de52ed8e"} Jan 07 03:48:14 crc kubenswrapper[4980]: I0107 03:48:14.238991 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" Jan 07 03:48:14 crc kubenswrapper[4980]: I0107 03:48:14.262544 4980 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" podStartSLOduration=3.5337088100000003 podStartE2EDuration="42.262528833s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.581899692 +0000 UTC m=+901.147594437" lastFinishedPulling="2026-01-07 03:48:13.310719725 +0000 UTC m=+939.876414460" observedRunningTime="2026-01-07 03:48:14.258410396 +0000 UTC m=+940.824105141" watchObservedRunningTime="2026-01-07 03:48:14.262528833 +0000 UTC m=+940.828223568" Jan 07 03:48:14 crc kubenswrapper[4980]: I0107 03:48:14.295691 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" podStartSLOduration=3.301693491 podStartE2EDuration="42.29567291s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.316134718 +0000 UTC m=+900.881829453" lastFinishedPulling="2026-01-07 03:48:13.310114137 +0000 UTC m=+939.875808872" observedRunningTime="2026-01-07 03:48:14.289486379 +0000 UTC m=+940.855181114" watchObservedRunningTime="2026-01-07 03:48:14.29567291 +0000 UTC m=+940.861367645" Jan 07 03:48:15 crc kubenswrapper[4980]: I0107 03:48:15.945250 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:48:15 crc kubenswrapper[4980]: I0107 03:48:15.947018 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:48:15 crc kubenswrapper[4980]: I0107 03:48:15.993780 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7bbf496545-vdwhj" Jan 07 03:48:16 crc kubenswrapper[4980]: I0107 03:48:16.010121 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:48:16 crc kubenswrapper[4980]: I0107 03:48:16.257970 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" event={"ID":"881d9164-37f7-48da-b203-a2e5db8e2d23","Type":"ContainerStarted","Data":"d9012d663157b1548ab8856d075c69fdc48c5be7c8602244752a4e52dc328ef3"} Jan 07 03:48:16 crc kubenswrapper[4980]: I0107 03:48:16.258293 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" Jan 07 03:48:16 crc kubenswrapper[4980]: I0107 03:48:16.278091 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" podStartSLOduration=3.353837905 podStartE2EDuration="43.278071176s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.589893669 +0000 UTC m=+901.155588404" lastFinishedPulling="2026-01-07 03:48:14.51412695 +0000 UTC m=+941.079821675" observedRunningTime="2026-01-07 03:48:16.273464224 +0000 UTC m=+942.839158959" watchObservedRunningTime="2026-01-07 03:48:16.278071176 +0000 UTC m=+942.843765911" Jan 07 03:48:16 crc kubenswrapper[4980]: I0107 03:48:16.325580 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:48:16 crc kubenswrapper[4980]: I0107 03:48:16.987245 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q4qsw"] Jan 07 03:48:17 crc kubenswrapper[4980]: I0107 03:48:17.738682 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 03:48:18 crc kubenswrapper[4980]: I0107 03:48:18.275146 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q4qsw" 
podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="registry-server" containerID="cri-o://d9a1400d28ac0f6d08a9155fb80ae7ba6031f4aebbe99e30df55fcbeb749b32e" gracePeriod=2 Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.289224 4980 generic.go:334] "Generic (PLEG): container finished" podID="147444c2-604e-45f4-8e61-2e903599d08e" containerID="d9a1400d28ac0f6d08a9155fb80ae7ba6031f4aebbe99e30df55fcbeb749b32e" exitCode=0 Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.289301 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4qsw" event={"ID":"147444c2-604e-45f4-8e61-2e903599d08e","Type":"ContainerDied","Data":"d9a1400d28ac0f6d08a9155fb80ae7ba6031f4aebbe99e30df55fcbeb749b32e"} Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.619896 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.804399 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tfpg\" (UniqueName: \"kubernetes.io/projected/147444c2-604e-45f4-8e61-2e903599d08e-kube-api-access-4tfpg\") pod \"147444c2-604e-45f4-8e61-2e903599d08e\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.804464 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-catalog-content\") pod \"147444c2-604e-45f4-8e61-2e903599d08e\" (UID: \"147444c2-604e-45f4-8e61-2e903599d08e\") " Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.804521 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-utilities\") pod \"147444c2-604e-45f4-8e61-2e903599d08e\" (UID: 
\"147444c2-604e-45f4-8e61-2e903599d08e\") " Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.806014 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-utilities" (OuterVolumeSpecName: "utilities") pod "147444c2-604e-45f4-8e61-2e903599d08e" (UID: "147444c2-604e-45f4-8e61-2e903599d08e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.818807 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147444c2-604e-45f4-8e61-2e903599d08e-kube-api-access-4tfpg" (OuterVolumeSpecName: "kube-api-access-4tfpg") pod "147444c2-604e-45f4-8e61-2e903599d08e" (UID: "147444c2-604e-45f4-8e61-2e903599d08e"). InnerVolumeSpecName "kube-api-access-4tfpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.858623 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "147444c2-604e-45f4-8e61-2e903599d08e" (UID: "147444c2-604e-45f4-8e61-2e903599d08e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.907361 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tfpg\" (UniqueName: \"kubernetes.io/projected/147444c2-604e-45f4-8e61-2e903599d08e-kube-api-access-4tfpg\") on node \"crc\" DevicePath \"\"" Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.907406 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:48:19 crc kubenswrapper[4980]: I0107 03:48:19.907424 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147444c2-604e-45f4-8e61-2e903599d08e-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:48:20 crc kubenswrapper[4980]: I0107 03:48:20.303408 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4qsw" event={"ID":"147444c2-604e-45f4-8e61-2e903599d08e","Type":"ContainerDied","Data":"30e76656dac8662118ecc1bac7c666c06e488b4c51b1276c9ab7a2e6a405b3c3"} Jan 07 03:48:20 crc kubenswrapper[4980]: I0107 03:48:20.303508 4980 scope.go:117] "RemoveContainer" containerID="d9a1400d28ac0f6d08a9155fb80ae7ba6031f4aebbe99e30df55fcbeb749b32e" Jan 07 03:48:20 crc kubenswrapper[4980]: I0107 03:48:20.303517 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q4qsw" Jan 07 03:48:20 crc kubenswrapper[4980]: I0107 03:48:20.346738 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q4qsw"] Jan 07 03:48:20 crc kubenswrapper[4980]: I0107 03:48:20.353112 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q4qsw"] Jan 07 03:48:20 crc kubenswrapper[4980]: I0107 03:48:20.353387 4980 scope.go:117] "RemoveContainer" containerID="70a6a1f1f18870958ca1903c8fce05f0be8bbc563abf174d4ff6e3bcdcbbe364" Jan 07 03:48:20 crc kubenswrapper[4980]: I0107 03:48:20.404304 4980 scope.go:117] "RemoveContainer" containerID="2b0a88348695dd4026c1585d18237e8403285ec0139bf7ba5c1753483b7cc358" Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.312073 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" event={"ID":"c86a562f-bdd6-4463-8edc-6ce72f41af16","Type":"ContainerStarted","Data":"59c0c4af63e67ebcc4269480c7a500f2bb41853c1cd557fb1b12d2879ceb4797"} Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.313461 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.314498 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" event={"ID":"96049d0d-7c90-4cab-a18c-5fbd4e9f8373","Type":"ContainerStarted","Data":"325f9faea3786dd713e242a551bf3a2846fd5a09bdb1f55b704fcd65c3344a8e"} Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.314782 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.317695 4980 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" event={"ID":"d8d586b5-b752-4122-99af-ba4ce3bbad29","Type":"ContainerStarted","Data":"9cdbcc8b5c23c7d54adc33c905de802d4155d61966973ab848f01aebc93f2ac0"} Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.318035 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.337216 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" podStartSLOduration=2.6408593 podStartE2EDuration="48.337195934s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.591133568 +0000 UTC m=+901.156828303" lastFinishedPulling="2026-01-07 03:48:20.287470162 +0000 UTC m=+946.853164937" observedRunningTime="2026-01-07 03:48:21.332417676 +0000 UTC m=+947.898112451" watchObservedRunningTime="2026-01-07 03:48:21.337195934 +0000 UTC m=+947.902890669" Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.351181 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" podStartSLOduration=2.427814478 podStartE2EDuration="48.351161937s" podCreationTimestamp="2026-01-07 03:47:33 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.590172258 +0000 UTC m=+901.155866983" lastFinishedPulling="2026-01-07 03:48:20.513519707 +0000 UTC m=+947.079214442" observedRunningTime="2026-01-07 03:48:21.350476957 +0000 UTC m=+947.916171692" watchObservedRunningTime="2026-01-07 03:48:21.351161937 +0000 UTC m=+947.916856662" Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.371434 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" podStartSLOduration=3.375103414 
podStartE2EDuration="49.371412015s" podCreationTimestamp="2026-01-07 03:47:32 +0000 UTC" firstStartedPulling="2026-01-07 03:47:34.581917012 +0000 UTC m=+901.147611747" lastFinishedPulling="2026-01-07 03:48:20.578225623 +0000 UTC m=+947.143920348" observedRunningTime="2026-01-07 03:48:21.368604268 +0000 UTC m=+947.934299003" watchObservedRunningTime="2026-01-07 03:48:21.371412015 +0000 UTC m=+947.937106760" Jan 07 03:48:21 crc kubenswrapper[4980]: I0107 03:48:21.745662 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147444c2-604e-45f4-8e61-2e903599d08e" path="/var/lib/kubelet/pods/147444c2-604e-45f4-8e61-2e903599d08e/volumes" Jan 07 03:48:22 crc kubenswrapper[4980]: I0107 03:48:22.585225 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:48:22 crc kubenswrapper[4980]: I0107 03:48:22.585739 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:48:22 crc kubenswrapper[4980]: I0107 03:48:22.679902 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:48:23 crc kubenswrapper[4980]: I0107 03:48:23.161952 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vghwj" Jan 07 03:48:23 crc kubenswrapper[4980]: I0107 03:48:23.243284 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-r9bm7" Jan 07 03:48:23 crc kubenswrapper[4980]: I0107 03:48:23.404322 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:48:23 crc kubenswrapper[4980]: I0107 03:48:23.498300 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-598945d5b8-5jzx4" Jan 07 03:48:23 crc kubenswrapper[4980]: I0107 03:48:23.590087 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-fd8dn" Jan 07 03:48:24 crc kubenswrapper[4980]: I0107 03:48:24.376088 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kdwhd"] Jan 07 03:48:25 crc kubenswrapper[4980]: I0107 03:48:25.108014 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-c5hk9" Jan 07 03:48:25 crc kubenswrapper[4980]: I0107 03:48:25.350305 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kdwhd" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="registry-server" containerID="cri-o://afa88e7aeb687cd04d0213027d9b5b05b61865190e5b848c7b28b2e97f0f118f" gracePeriod=2 Jan 07 03:48:25 crc kubenswrapper[4980]: I0107 03:48:25.413705 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72mjfc" Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.386344 4980 generic.go:334] "Generic (PLEG): container finished" podID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerID="afa88e7aeb687cd04d0213027d9b5b05b61865190e5b848c7b28b2e97f0f118f" exitCode=0 Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.386534 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdwhd" event={"ID":"54bfae0d-a9c8-4dd6-802e-e5de80d8299b","Type":"ContainerDied","Data":"afa88e7aeb687cd04d0213027d9b5b05b61865190e5b848c7b28b2e97f0f118f"} Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.553851 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.594936 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-catalog-content\") pod \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.595201 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-utilities\") pod \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.595337 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gchn\" (UniqueName: \"kubernetes.io/projected/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-kube-api-access-6gchn\") pod \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\" (UID: \"54bfae0d-a9c8-4dd6-802e-e5de80d8299b\") " Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.596056 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-utilities" (OuterVolumeSpecName: "utilities") pod "54bfae0d-a9c8-4dd6-802e-e5de80d8299b" (UID: "54bfae0d-a9c8-4dd6-802e-e5de80d8299b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.596696 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.610590 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-kube-api-access-6gchn" (OuterVolumeSpecName: "kube-api-access-6gchn") pod "54bfae0d-a9c8-4dd6-802e-e5de80d8299b" (UID: "54bfae0d-a9c8-4dd6-802e-e5de80d8299b"). InnerVolumeSpecName "kube-api-access-6gchn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.651476 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54bfae0d-a9c8-4dd6-802e-e5de80d8299b" (UID: "54bfae0d-a9c8-4dd6-802e-e5de80d8299b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.698616 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gchn\" (UniqueName: \"kubernetes.io/projected/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-kube-api-access-6gchn\") on node \"crc\" DevicePath \"\"" Jan 07 03:48:29 crc kubenswrapper[4980]: I0107 03:48:29.698671 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bfae0d-a9c8-4dd6-802e-e5de80d8299b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:48:30 crc kubenswrapper[4980]: I0107 03:48:30.400254 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdwhd" event={"ID":"54bfae0d-a9c8-4dd6-802e-e5de80d8299b","Type":"ContainerDied","Data":"54c27d37b35fabf49d9d92f1e2b358b18c04651e954d3298aadf4d83d02a5dde"} Jan 07 03:48:30 crc kubenswrapper[4980]: I0107 03:48:30.400349 4980 scope.go:117] "RemoveContainer" containerID="afa88e7aeb687cd04d0213027d9b5b05b61865190e5b848c7b28b2e97f0f118f" Jan 07 03:48:30 crc kubenswrapper[4980]: I0107 03:48:30.400476 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdwhd" Jan 07 03:48:30 crc kubenswrapper[4980]: I0107 03:48:30.450735 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kdwhd"] Jan 07 03:48:30 crc kubenswrapper[4980]: I0107 03:48:30.451764 4980 scope.go:117] "RemoveContainer" containerID="19cec7a5f43924b276c852367b7fec40c5bb3d7890b8c272d1d76291eca676a1" Jan 07 03:48:30 crc kubenswrapper[4980]: I0107 03:48:30.463958 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kdwhd"] Jan 07 03:48:30 crc kubenswrapper[4980]: I0107 03:48:30.477146 4980 scope.go:117] "RemoveContainer" containerID="8012149d3112c9b646d0d91886a12393233b9a85ba8da8443398ea286b8ed407" Jan 07 03:48:31 crc kubenswrapper[4980]: I0107 03:48:31.769030 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" path="/var/lib/kubelet/pods/54bfae0d-a9c8-4dd6-802e-e5de80d8299b/volumes" Jan 07 03:48:33 crc kubenswrapper[4980]: I0107 03:48:33.378166 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-ndxck" Jan 07 03:48:33 crc kubenswrapper[4980]: I0107 03:48:33.564525 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-55fcj" Jan 07 03:48:33 crc kubenswrapper[4980]: I0107 03:48:33.625643 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-fdnhb" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.638166 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4kk7"] Jan 07 03:48:48 crc kubenswrapper[4980]: E0107 03:48:48.639235 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="extract-content" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639252 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="extract-content" Jan 07 03:48:48 crc kubenswrapper[4980]: E0107 03:48:48.639267 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="registry-server" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639275 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="registry-server" Jan 07 03:48:48 crc kubenswrapper[4980]: E0107 03:48:48.639290 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="extract-utilities" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639300 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="extract-utilities" Jan 07 03:48:48 crc kubenswrapper[4980]: E0107 03:48:48.639318 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="extract-content" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639326 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="extract-content" Jan 07 03:48:48 crc kubenswrapper[4980]: E0107 03:48:48.639340 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="registry-server" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639347 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="registry-server" Jan 07 03:48:48 crc kubenswrapper[4980]: E0107 03:48:48.639360 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="extract-utilities" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639367 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="extract-utilities" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639519 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="147444c2-604e-45f4-8e61-2e903599d08e" containerName="registry-server" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.639544 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="54bfae0d-a9c8-4dd6-802e-e5de80d8299b" containerName="registry-server" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.640436 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.644201 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.644243 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.644282 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fbzcd" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.644201 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.654642 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4kk7"] Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.710425 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k7q26"] Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.711848 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.714352 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.724896 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f33686-ed85-4099-ab5e-cf9f02eada1e-config\") pod \"dnsmasq-dns-675f4bcbfc-d4kk7\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.724997 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8wfd\" (UniqueName: \"kubernetes.io/projected/38f33686-ed85-4099-ab5e-cf9f02eada1e-kube-api-access-l8wfd\") pod \"dnsmasq-dns-675f4bcbfc-d4kk7\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.727260 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k7q26"] Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.826912 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.827020 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gnhd\" (UniqueName: \"kubernetes.io/projected/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-kube-api-access-7gnhd\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.827062 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8wfd\" (UniqueName: \"kubernetes.io/projected/38f33686-ed85-4099-ab5e-cf9f02eada1e-kube-api-access-l8wfd\") pod \"dnsmasq-dns-675f4bcbfc-d4kk7\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.827112 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f33686-ed85-4099-ab5e-cf9f02eada1e-config\") pod \"dnsmasq-dns-675f4bcbfc-d4kk7\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.827129 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-config\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.828120 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f33686-ed85-4099-ab5e-cf9f02eada1e-config\") pod \"dnsmasq-dns-675f4bcbfc-d4kk7\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.845800 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8wfd\" (UniqueName: \"kubernetes.io/projected/38f33686-ed85-4099-ab5e-cf9f02eada1e-kube-api-access-l8wfd\") pod \"dnsmasq-dns-675f4bcbfc-d4kk7\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:48 crc 
kubenswrapper[4980]: I0107 03:48:48.928153 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.928268 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gnhd\" (UniqueName: \"kubernetes.io/projected/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-kube-api-access-7gnhd\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.928364 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-config\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.929474 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.929868 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-config\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.945061 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7gnhd\" (UniqueName: \"kubernetes.io/projected/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-kube-api-access-7gnhd\") pod \"dnsmasq-dns-78dd6ddcc-k7q26\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:48 crc kubenswrapper[4980]: I0107 03:48:48.971459 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:48:49 crc kubenswrapper[4980]: I0107 03:48:49.038323 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:48:49 crc kubenswrapper[4980]: I0107 03:48:49.415137 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4kk7"] Jan 07 03:48:49 crc kubenswrapper[4980]: I0107 03:48:49.512657 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k7q26"] Jan 07 03:48:49 crc kubenswrapper[4980]: W0107 03:48:49.514328 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce5b0cf_5131_4e84_bfde_a5b87b09a8f7.slice/crio-a536951031a425d9406bd88660c46e653e725364335c8f9f84c3a97388b0715d WatchSource:0}: Error finding container a536951031a425d9406bd88660c46e653e725364335c8f9f84c3a97388b0715d: Status 404 returned error can't find the container with id a536951031a425d9406bd88660c46e653e725364335c8f9f84c3a97388b0715d Jan 07 03:48:49 crc kubenswrapper[4980]: I0107 03:48:49.552475 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" event={"ID":"38f33686-ed85-4099-ab5e-cf9f02eada1e","Type":"ContainerStarted","Data":"92073ab7d5fe25e16f416da38b5ce504070e124df06d14948e20f3ad29168b6e"} Jan 07 03:48:49 crc kubenswrapper[4980]: I0107 03:48:49.553420 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" 
event={"ID":"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7","Type":"ContainerStarted","Data":"a536951031a425d9406bd88660c46e653e725364335c8f9f84c3a97388b0715d"} Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.378327 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4kk7"] Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.399531 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bnxpz"] Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.404188 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.424980 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bnxpz"] Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.467177 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-config\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.467250 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.467293 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlmr\" (UniqueName: \"kubernetes.io/projected/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-kube-api-access-mtlmr\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " 
pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.568358 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.568419 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtlmr\" (UniqueName: \"kubernetes.io/projected/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-kube-api-access-mtlmr\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.568491 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-config\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.569577 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.570482 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-config\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.591123 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtlmr\" (UniqueName: \"kubernetes.io/projected/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-kube-api-access-mtlmr\") pod \"dnsmasq-dns-666b6646f7-bnxpz\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.632620 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k7q26"] Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.654490 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnkbg"] Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.655669 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.663752 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnkbg"] Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.732529 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.772030 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.772444 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-config\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.772519 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbnq\" (UniqueName: \"kubernetes.io/projected/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-kube-api-access-ntbnq\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.875439 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.875543 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-config\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.875655 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbnq\" (UniqueName: \"kubernetes.io/projected/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-kube-api-access-ntbnq\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.876913 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.877956 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-config\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.896706 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntbnq\" (UniqueName: \"kubernetes.io/projected/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-kube-api-access-ntbnq\") pod \"dnsmasq-dns-57d769cc4f-fnkbg\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:51 crc kubenswrapper[4980]: I0107 03:48:51.977836 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.243642 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bnxpz"] Jan 07 03:48:52 crc kubenswrapper[4980]: W0107 03:48:52.258788 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f2ec5a_9ef5_4f32_aaa9_c8c8972035c2.slice/crio-1ad6f31d6afff09d8457b24eb343cd449d2172d79c1a4470b1099972ade2ff0c WatchSource:0}: Error finding container 1ad6f31d6afff09d8457b24eb343cd449d2172d79c1a4470b1099972ade2ff0c: Status 404 returned error can't find the container with id 1ad6f31d6afff09d8457b24eb343cd449d2172d79c1a4470b1099972ade2ff0c Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.462723 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnkbg"] Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.528439 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.530869 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.534850 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.535180 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.535361 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.542048 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.542211 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.542467 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.550960 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6cc44"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.551819 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.583930 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" event={"ID":"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2","Type":"ContainerStarted","Data":"1ad6f31d6afff09d8457b24eb343cd449d2172d79c1a4470b1099972ade2ff0c"}
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586268 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586308 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586330 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6714f510-9927-47da-bc8b-3e4a3995cdc6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586348 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586464 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6714f510-9927-47da-bc8b-3e4a3995cdc6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586514 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586545 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-config-data\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586576 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586604 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586623 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.586643 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbhhh\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-kube-api-access-zbhhh\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688279 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688327 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbhhh\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-kube-api-access-zbhhh\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688386 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688415 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688434 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6714f510-9927-47da-bc8b-3e4a3995cdc6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688450 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688472 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6714f510-9927-47da-bc8b-3e4a3995cdc6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688507 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688535 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-config-data\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688569 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.688599 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.689042 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.689256 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.689524 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.689848 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.694651 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6714f510-9927-47da-bc8b-3e4a3995cdc6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.695591 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.695868 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-config-data\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.696963 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.699855 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.701462 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6714f510-9927-47da-bc8b-3e4a3995cdc6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.707823 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbhhh\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-kube-api-access-zbhhh\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.731659 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.794993 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.796672 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.799357 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.799363 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.799510 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.799828 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.801394 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.801651 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lkjrg"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.802445 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.809804 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.863334 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895177 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895224 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895247 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxbsc\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-kube-api-access-pxbsc\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895277 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895294 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895312 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895328 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26440bb2-233e-47e3-bb46-9122523bce68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895344 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895364 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895385 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26440bb2-233e-47e3-bb46-9122523bce68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.895419 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995774 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995820 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995844 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995862 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26440bb2-233e-47e3-bb46-9122523bce68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995877 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995894 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995915 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26440bb2-233e-47e3-bb46-9122523bce68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995948 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.995990 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.996007 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.996026 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxbsc\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-kube-api-access-pxbsc\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.996500 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.997677 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.998204 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.998388 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:52 crc kubenswrapper[4980]: I0107 03:48:52.998659 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:52.999196 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:53.015054 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxbsc\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-kube-api-access-pxbsc\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:53.015331 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26440bb2-233e-47e3-bb46-9122523bce68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:53.016533 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:53.019649 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:53.022954 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:53.032274 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26440bb2-233e-47e3-bb46-9122523bce68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:53 crc kubenswrapper[4980]: I0107 03:48:53.127320 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.077223 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.080504 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.084440 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kk8db"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.086245 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.086254 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.087714 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.091240 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.102752 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.120888 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.120961 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-kube-api-access-4tqnw\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.121015 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.121066 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.121109 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.121146 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.121209 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.121235 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222085 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222135 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222176 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222207 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-kube-api-access-4tqnw\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222227 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222261 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222282 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.222305 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.224493 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.224586 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.224861 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.234499 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.235021 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.236582 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.244348 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-kube-api-access-4tqnw\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.247232 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.260705 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2\") " pod="openstack/openstack-galera-0"
Jan 07 03:48:54 crc kubenswrapper[4980]: I0107 03:48:54.405280 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.502932 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.505729 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.509711 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pz65m" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.510282 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.510333 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.514587 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.524977 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656406 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656527 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336b2a3-f175-44d1-9771-adabe71eea6c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656616 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cck\" (UniqueName: 
\"kubernetes.io/projected/3336b2a3-f175-44d1-9771-adabe71eea6c-kube-api-access-b2cck\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656658 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656692 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656794 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656828 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336b2a3-f175-44d1-9771-adabe71eea6c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.656877 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/3336b2a3-f175-44d1-9771-adabe71eea6c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.758793 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.758860 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336b2a3-f175-44d1-9771-adabe71eea6c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.758891 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2cck\" (UniqueName: \"kubernetes.io/projected/3336b2a3-f175-44d1-9771-adabe71eea6c-kube-api-access-b2cck\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.758914 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.758932 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.758969 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.758984 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336b2a3-f175-44d1-9771-adabe71eea6c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.759015 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3336b2a3-f175-44d1-9771-adabe71eea6c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.759794 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.760249 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.761020 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.761516 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3336b2a3-f175-44d1-9771-adabe71eea6c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.763103 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3336b2a3-f175-44d1-9771-adabe71eea6c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.774316 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336b2a3-f175-44d1-9771-adabe71eea6c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.774447 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336b2a3-f175-44d1-9771-adabe71eea6c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " 
pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.790174 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.792519 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2cck\" (UniqueName: \"kubernetes.io/projected/3336b2a3-f175-44d1-9771-adabe71eea6c-kube-api-access-b2cck\") pod \"openstack-cell1-galera-0\" (UID: \"3336b2a3-f175-44d1-9771-adabe71eea6c\") " pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.876012 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.945165 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.946199 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.949258 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.949798 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.949995 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ggrq6" Jan 07 03:48:55 crc kubenswrapper[4980]: I0107 03:48:55.970662 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.063999 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.064407 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-kolla-config\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.064460 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-config-data\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.064485 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f69g4\" (UniqueName: \"kubernetes.io/projected/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-kube-api-access-f69g4\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.064513 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.166658 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.166768 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.166795 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-kolla-config\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.166827 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-config-data\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " 
pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.166862 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69g4\" (UniqueName: \"kubernetes.io/projected/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-kube-api-access-f69g4\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.167675 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-kolla-config\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.167963 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-config-data\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.174922 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.185744 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69g4\" (UniqueName: \"kubernetes.io/projected/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-kube-api-access-f69g4\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.185801 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cf13ed1a-99f7-4574-a18a-7e559c48ddaa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cf13ed1a-99f7-4574-a18a-7e559c48ddaa\") " pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.265706 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 07 03:48:56 crc kubenswrapper[4980]: I0107 03:48:56.631093 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" event={"ID":"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9","Type":"ContainerStarted","Data":"9e8db1b39dbfddbee3abc86deffd3d0b60331f8bf3291e64834bd38472009dbb"} Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.024389 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.025601 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.028520 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pffxr" Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.035592 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.203032 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggfh9\" (UniqueName: \"kubernetes.io/projected/f3fa7e62-6ab5-4edb-9311-9b49a85c766b-kube-api-access-ggfh9\") pod \"kube-state-metrics-0\" (UID: \"f3fa7e62-6ab5-4edb-9311-9b49a85c766b\") " pod="openstack/kube-state-metrics-0" Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.304384 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggfh9\" (UniqueName: 
\"kubernetes.io/projected/f3fa7e62-6ab5-4edb-9311-9b49a85c766b-kube-api-access-ggfh9\") pod \"kube-state-metrics-0\" (UID: \"f3fa7e62-6ab5-4edb-9311-9b49a85c766b\") " pod="openstack/kube-state-metrics-0" Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.342067 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggfh9\" (UniqueName: \"kubernetes.io/projected/f3fa7e62-6ab5-4edb-9311-9b49a85c766b-kube-api-access-ggfh9\") pod \"kube-state-metrics-0\" (UID: \"f3fa7e62-6ab5-4edb-9311-9b49a85c766b\") " pod="openstack/kube-state-metrics-0" Jan 07 03:48:58 crc kubenswrapper[4980]: I0107 03:48:58.345300 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.821481 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-94rwj"] Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.823796 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.830641 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.830842 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7ddst" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.831041 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.832068 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94rwj"] Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.841533 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4nfg5"] Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.849835 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.860046 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4nfg5"] Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.968998 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-run-ovn\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.969411 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b80c3b0-701f-4616-b851-c954a9421bf6-scripts\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.969466 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-log-ovn\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.969496 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-log\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.969535 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-lib\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.969584 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-etc-ovs\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.969923 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-combined-ca-bundle\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.970183 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-run\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.970458 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvgq\" (UniqueName: \"kubernetes.io/projected/2b80c3b0-701f-4616-b851-c954a9421bf6-kube-api-access-pvvgq\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.970602 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-ovn-controller-tls-certs\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.970674 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-scripts\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.970722 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c545t\" (UniqueName: \"kubernetes.io/projected/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-kube-api-access-c545t\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:01 crc kubenswrapper[4980]: I0107 03:49:01.970975 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-run\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.073369 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-combined-ca-bundle\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.073514 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-run\") pod 
\"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.074503 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-run\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.074640 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvgq\" (UniqueName: \"kubernetes.io/projected/2b80c3b0-701f-4616-b851-c954a9421bf6-kube-api-access-pvvgq\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.075143 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-ovn-controller-tls-certs\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.075214 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-scripts\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.075256 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c545t\" (UniqueName: \"kubernetes.io/projected/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-kube-api-access-c545t\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 
07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.083478 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-scripts\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.084097 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-run\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.084277 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-run\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.084394 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-run-ovn\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.084536 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-combined-ca-bundle\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.084639 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-run-ovn\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.085033 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b80c3b0-701f-4616-b851-c954a9421bf6-scripts\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.085444 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-log-ovn\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.085624 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-log\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.085685 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-var-log-ovn\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.085953 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-log\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " 
pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.086113 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-lib\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.086536 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-var-lib\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.086938 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b80c3b0-701f-4616-b851-c954a9421bf6-scripts\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.087008 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-etc-ovs\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.087150 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b80c3b0-701f-4616-b851-c954a9421bf6-etc-ovs\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.091821 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-ovn-controller-tls-certs\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.092236 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c545t\" (UniqueName: \"kubernetes.io/projected/4567269f-c5aa-44a8-8e68-c0dc01c2b55c-kube-api-access-c545t\") pod \"ovn-controller-94rwj\" (UID: \"4567269f-c5aa-44a8-8e68-c0dc01c2b55c\") " pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.107288 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvgq\" (UniqueName: \"kubernetes.io/projected/2b80c3b0-701f-4616-b851-c954a9421bf6-kube-api-access-pvvgq\") pod \"ovn-controller-ovs-4nfg5\" (UID: \"2b80c3b0-701f-4616-b851-c954a9421bf6\") " pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.142173 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94rwj" Jan 07 03:49:02 crc kubenswrapper[4980]: I0107 03:49:02.180086 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.964567 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.966760 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.969833 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.970331 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-q9x49" Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.971418 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.971937 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.975061 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 07 03:49:04 crc kubenswrapper[4980]: I0107 03:49:04.981860 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.036877 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.037137 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.037222 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.037295 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.037394 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92qhz\" (UniqueName: \"kubernetes.io/projected/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-kube-api-access-92qhz\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.037478 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.037569 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.037657 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.145836 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.145930 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.145978 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.146002 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.146029 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 
crc kubenswrapper[4980]: I0107 03:49:05.146095 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92qhz\" (UniqueName: \"kubernetes.io/projected/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-kube-api-access-92qhz\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.146129 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.146158 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.146760 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.147972 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.148314 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.149748 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.157190 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.162862 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.164712 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.165464 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4dlpk" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.166439 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.167136 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.167471 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.171224 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.173291 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92qhz\" (UniqueName: \"kubernetes.io/projected/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-kube-api-access-92qhz\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.175052 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.179068 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.185850 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc84f69f-9bab-40e5-80a8-75266ef8f4b7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc84f69f-9bab-40e5-80a8-75266ef8f4b7\") " pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248031 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248085 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248172 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/808ebed8-cef0-4938-9ad2-64f28d9c8af2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248204 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6qzt\" (UniqueName: \"kubernetes.io/projected/808ebed8-cef0-4938-9ad2-64f28d9c8af2-kube-api-access-m6qzt\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248251 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248286 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248307 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/808ebed8-cef0-4938-9ad2-64f28d9c8af2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.248327 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808ebed8-cef0-4938-9ad2-64f28d9c8af2-config\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.308517 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.350340 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.350649 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.350759 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/808ebed8-cef0-4938-9ad2-64f28d9c8af2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.350879 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6qzt\" (UniqueName: \"kubernetes.io/projected/808ebed8-cef0-4938-9ad2-64f28d9c8af2-kube-api-access-m6qzt\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.350970 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.351059 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.351125 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/808ebed8-cef0-4938-9ad2-64f28d9c8af2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.351199 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808ebed8-cef0-4938-9ad2-64f28d9c8af2-config\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.351717 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" 
(UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.352175 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/808ebed8-cef0-4938-9ad2-64f28d9c8af2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.352897 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808ebed8-cef0-4938-9ad2-64f28d9c8af2-config\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.354067 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/808ebed8-cef0-4938-9ad2-64f28d9c8af2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.355843 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.356134 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.356482 4980 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/808ebed8-cef0-4938-9ad2-64f28d9c8af2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.382793 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6qzt\" (UniqueName: \"kubernetes.io/projected/808ebed8-cef0-4938-9ad2-64f28d9c8af2-kube-api-access-m6qzt\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.384332 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"808ebed8-cef0-4938-9ad2-64f28d9c8af2\") " pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:05 crc kubenswrapper[4980]: I0107 03:49:05.542138 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:07 crc kubenswrapper[4980]: E0107 03:49:07.620833 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 07 03:49:07 crc kubenswrapper[4980]: E0107 03:49:07.621498 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8wfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-d4kk7_openstack(38f33686-ed85-4099-ab5e-cf9f02eada1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:49:07 crc kubenswrapper[4980]: E0107 03:49:07.622643 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" podUID="38f33686-ed85-4099-ab5e-cf9f02eada1e" Jan 07 03:49:07 crc kubenswrapper[4980]: E0107 03:49:07.694057 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 07 03:49:07 crc kubenswrapper[4980]: E0107 03:49:07.694236 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gnhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-k7q26_openstack(9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:49:07 crc kubenswrapper[4980]: E0107 03:49:07.696149 4980 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" podUID="9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.085938 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 07 03:49:08 crc kubenswrapper[4980]: W0107 03:49:08.100358 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26440bb2_233e_47e3_bb46_9122523bce68.slice/crio-308772a2d63acb0eebd2588e53d0393d5cebdb5865a62d625e6c843906b35a25 WatchSource:0}: Error finding container 308772a2d63acb0eebd2588e53d0393d5cebdb5865a62d625e6c843906b35a25: Status 404 returned error can't find the container with id 308772a2d63acb0eebd2588e53d0393d5cebdb5865a62d625e6c843906b35a25 Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.290480 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.302315 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.436585 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-dns-svc\") pod \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.436700 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f33686-ed85-4099-ab5e-cf9f02eada1e-config\") pod \"38f33686-ed85-4099-ab5e-cf9f02eada1e\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.436742 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8wfd\" (UniqueName: \"kubernetes.io/projected/38f33686-ed85-4099-ab5e-cf9f02eada1e-kube-api-access-l8wfd\") pod \"38f33686-ed85-4099-ab5e-cf9f02eada1e\" (UID: \"38f33686-ed85-4099-ab5e-cf9f02eada1e\") " Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.436796 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gnhd\" (UniqueName: \"kubernetes.io/projected/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-kube-api-access-7gnhd\") pod \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.436828 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-config\") pod \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\" (UID: \"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7\") " Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.437182 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7" (UID: "9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.437587 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.438016 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f33686-ed85-4099-ab5e-cf9f02eada1e-config" (OuterVolumeSpecName: "config") pod "38f33686-ed85-4099-ab5e-cf9f02eada1e" (UID: "38f33686-ed85-4099-ab5e-cf9f02eada1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.438760 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-config" (OuterVolumeSpecName: "config") pod "9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7" (UID: "9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.446810 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f33686-ed85-4099-ab5e-cf9f02eada1e-kube-api-access-l8wfd" (OuterVolumeSpecName: "kube-api-access-l8wfd") pod "38f33686-ed85-4099-ab5e-cf9f02eada1e" (UID: "38f33686-ed85-4099-ab5e-cf9f02eada1e"). InnerVolumeSpecName "kube-api-access-l8wfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.446905 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-kube-api-access-7gnhd" (OuterVolumeSpecName: "kube-api-access-7gnhd") pod "9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7" (UID: "9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7"). InnerVolumeSpecName "kube-api-access-7gnhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.539617 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f33686-ed85-4099-ab5e-cf9f02eada1e-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.539672 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8wfd\" (UniqueName: \"kubernetes.io/projected/38f33686-ed85-4099-ab5e-cf9f02eada1e-kube-api-access-l8wfd\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.539690 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gnhd\" (UniqueName: \"kubernetes.io/projected/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-kube-api-access-7gnhd\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.539705 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.645862 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.656811 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.684801 4980 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:49:08 crc kubenswrapper[4980]: W0107 03:49:08.687423 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4699a3c_f9f1_4c80_93fb_dcb3b9e852b2.slice/crio-46b91c580251736f70f6546420ba7d1c97635c856e7605d91dd5512900d1334e WatchSource:0}: Error finding container 46b91c580251736f70f6546420ba7d1c97635c856e7605d91dd5512900d1334e: Status 404 returned error can't find the container with id 46b91c580251736f70f6546420ba7d1c97635c856e7605d91dd5512900d1334e Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.696954 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.744972 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94rwj"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.759616 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" event={"ID":"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9","Type":"ContainerStarted","Data":"447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509"} Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.765835 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3336b2a3-f175-44d1-9771-adabe71eea6c","Type":"ContainerStarted","Data":"96429e5e04b769249c37467c7d48950d3ad46c2b4f787aa1d418962e6a379fd3"} Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.770898 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" event={"ID":"9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7","Type":"ContainerDied","Data":"a536951031a425d9406bd88660c46e653e725364335c8f9f84c3a97388b0715d"} Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.771030 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k7q26" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.775809 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" event={"ID":"38f33686-ed85-4099-ab5e-cf9f02eada1e","Type":"ContainerDied","Data":"92073ab7d5fe25e16f416da38b5ce504070e124df06d14948e20f3ad29168b6e"} Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.775917 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4kk7" Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.781037 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.781813 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26440bb2-233e-47e3-bb46-9122523bce68","Type":"ContainerStarted","Data":"308772a2d63acb0eebd2588e53d0393d5cebdb5865a62d625e6c843906b35a25"} Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.785378 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2","Type":"ContainerStarted","Data":"46b91c580251736f70f6546420ba7d1c97635c856e7605d91dd5512900d1334e"} Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.861245 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4kk7"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.866441 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4kk7"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.871018 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 07 03:49:08 crc kubenswrapper[4980]: I0107 03:49:08.880167 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k7q26"] Jan 07 03:49:08 
crc kubenswrapper[4980]: I0107 03:49:08.884373 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k7q26"] Jan 07 03:49:08 crc kubenswrapper[4980]: E0107 03:49:08.975869 4980 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce5b0cf_5131_4e84_bfde_a5b87b09a8f7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38f33686_ed85_4099_ab5e_cf9f02eada1e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38f33686_ed85_4099_ab5e_cf9f02eada1e.slice/crio-92073ab7d5fe25e16f416da38b5ce504070e124df06d14948e20f3ad29168b6e\": RecentStats: unable to find data in memory cache]" Jan 07 03:49:09 crc kubenswrapper[4980]: I0107 03:49:09.386648 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 07 03:49:09 crc kubenswrapper[4980]: I0107 03:49:09.627216 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4nfg5"] Jan 07 03:49:09 crc kubenswrapper[4980]: I0107 03:49:09.752920 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f33686-ed85-4099-ab5e-cf9f02eada1e" path="/var/lib/kubelet/pods/38f33686-ed85-4099-ab5e-cf9f02eada1e/volumes" Jan 07 03:49:09 crc kubenswrapper[4980]: I0107 03:49:09.753333 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7" path="/var/lib/kubelet/pods/9ce5b0cf-5131-4e84-bfde-a5b87b09a8f7/volumes" Jan 07 03:49:09 crc kubenswrapper[4980]: I0107 03:49:09.797883 4980 generic.go:334] "Generic (PLEG): container finished" podID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerID="447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509" exitCode=0 Jan 07 03:49:09 crc kubenswrapper[4980]: 
I0107 03:49:09.797999 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" event={"ID":"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9","Type":"ContainerDied","Data":"447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509"} Jan 07 03:49:10 crc kubenswrapper[4980]: W0107 03:49:10.774988 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6714f510_9927_47da_bc8b_3e4a3995cdc6.slice/crio-bb718c083108e3eea538680a1e5b66e6e69c78c280d9bb547747622a1fc8dcef WatchSource:0}: Error finding container bb718c083108e3eea538680a1e5b66e6e69c78c280d9bb547747622a1fc8dcef: Status 404 returned error can't find the container with id bb718c083108e3eea538680a1e5b66e6e69c78c280d9bb547747622a1fc8dcef Jan 07 03:49:10 crc kubenswrapper[4980]: W0107 03:49:10.781792 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3fa7e62_6ab5_4edb_9311_9b49a85c766b.slice/crio-7e89d93e568ddeceee7f8c2f95a9f69f7b003e235b53dc1c7926c1ca33fc0cc7 WatchSource:0}: Error finding container 7e89d93e568ddeceee7f8c2f95a9f69f7b003e235b53dc1c7926c1ca33fc0cc7: Status 404 returned error can't find the container with id 7e89d93e568ddeceee7f8c2f95a9f69f7b003e235b53dc1c7926c1ca33fc0cc7 Jan 07 03:49:10 crc kubenswrapper[4980]: W0107 03:49:10.782524 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc84f69f_9bab_40e5_80a8_75266ef8f4b7.slice/crio-e7302d670898b07e7fce6cb3b4950ef3558ac0be5484239f89275a9a1a86caef WatchSource:0}: Error finding container e7302d670898b07e7fce6cb3b4950ef3558ac0be5484239f89275a9a1a86caef: Status 404 returned error can't find the container with id e7302d670898b07e7fce6cb3b4950ef3558ac0be5484239f89275a9a1a86caef Jan 07 03:49:10 crc kubenswrapper[4980]: W0107 03:49:10.786202 4980 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b80c3b0_701f_4616_b851_c954a9421bf6.slice/crio-05b4cf0d04f62cc39967fa3920ebd67e1638a99de4c5a2f7c6b627100f5d9e0f WatchSource:0}: Error finding container 05b4cf0d04f62cc39967fa3920ebd67e1638a99de4c5a2f7c6b627100f5d9e0f: Status 404 returned error can't find the container with id 05b4cf0d04f62cc39967fa3920ebd67e1638a99de4c5a2f7c6b627100f5d9e0f Jan 07 03:49:10 crc kubenswrapper[4980]: I0107 03:49:10.810439 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6714f510-9927-47da-bc8b-3e4a3995cdc6","Type":"ContainerStarted","Data":"bb718c083108e3eea538680a1e5b66e6e69c78c280d9bb547747622a1fc8dcef"} Jan 07 03:49:10 crc kubenswrapper[4980]: I0107 03:49:10.812521 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94rwj" event={"ID":"4567269f-c5aa-44a8-8e68-c0dc01c2b55c","Type":"ContainerStarted","Data":"76c5a9810ba53927588fc9cdb51d0f3a699c4f643e68eb1597891b5ab3699a8a"} Jan 07 03:49:10 crc kubenswrapper[4980]: I0107 03:49:10.814612 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc84f69f-9bab-40e5-80a8-75266ef8f4b7","Type":"ContainerStarted","Data":"e7302d670898b07e7fce6cb3b4950ef3558ac0be5484239f89275a9a1a86caef"} Jan 07 03:49:10 crc kubenswrapper[4980]: I0107 03:49:10.821331 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cf13ed1a-99f7-4574-a18a-7e559c48ddaa","Type":"ContainerStarted","Data":"bd3c402f6068b047f7e0ead2e4d8a8cacd4aee6ba1e2bb35e1133b3cc59ae359"} Jan 07 03:49:10 crc kubenswrapper[4980]: I0107 03:49:10.824315 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nfg5" event={"ID":"2b80c3b0-701f-4616-b851-c954a9421bf6","Type":"ContainerStarted","Data":"05b4cf0d04f62cc39967fa3920ebd67e1638a99de4c5a2f7c6b627100f5d9e0f"} Jan 07 03:49:10 crc 
kubenswrapper[4980]: I0107 03:49:10.828609 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"808ebed8-cef0-4938-9ad2-64f28d9c8af2","Type":"ContainerStarted","Data":"22ecdea4c2d091275713f9df7dc41fe5725eacf1ae572dacde8cd016552b153f"} Jan 07 03:49:10 crc kubenswrapper[4980]: I0107 03:49:10.831204 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3fa7e62-6ab5-4edb-9311-9b49a85c766b","Type":"ContainerStarted","Data":"7e89d93e568ddeceee7f8c2f95a9f69f7b003e235b53dc1c7926c1ca33fc0cc7"} Jan 07 03:49:11 crc kubenswrapper[4980]: I0107 03:49:11.844997 4980 generic.go:334] "Generic (PLEG): container finished" podID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerID="4e0b97bf5301c495e50ec237a495e868453b27dd575f7266ea9eea3c773c9b57" exitCode=0 Jan 07 03:49:11 crc kubenswrapper[4980]: I0107 03:49:11.845141 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" event={"ID":"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2","Type":"ContainerDied","Data":"4e0b97bf5301c495e50ec237a495e868453b27dd575f7266ea9eea3c773c9b57"} Jan 07 03:49:11 crc kubenswrapper[4980]: I0107 03:49:11.850698 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" event={"ID":"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9","Type":"ContainerStarted","Data":"28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2"} Jan 07 03:49:11 crc kubenswrapper[4980]: I0107 03:49:11.851104 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:49:11 crc kubenswrapper[4980]: I0107 03:49:11.900341 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" podStartSLOduration=9.021719405 podStartE2EDuration="20.900318353s" podCreationTimestamp="2026-01-07 03:48:51 +0000 UTC" firstStartedPulling="2026-01-07 
03:48:55.976924915 +0000 UTC m=+982.542619730" lastFinishedPulling="2026-01-07 03:49:07.855523943 +0000 UTC m=+994.421218678" observedRunningTime="2026-01-07 03:49:11.88870486 +0000 UTC m=+998.454399665" watchObservedRunningTime="2026-01-07 03:49:11.900318353 +0000 UTC m=+998.466013098" Jan 07 03:49:12 crc kubenswrapper[4980]: I0107 03:49:12.861688 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" event={"ID":"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2","Type":"ContainerStarted","Data":"eba7a286d8a02ad6862a3584602c83744854665caac91111e303538ffa4d7a6c"} Jan 07 03:49:12 crc kubenswrapper[4980]: I0107 03:49:12.861804 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:49:12 crc kubenswrapper[4980]: I0107 03:49:12.883193 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" podStartSLOduration=3.32141947 podStartE2EDuration="21.883178123s" podCreationTimestamp="2026-01-07 03:48:51 +0000 UTC" firstStartedPulling="2026-01-07 03:48:52.260683486 +0000 UTC m=+978.826378221" lastFinishedPulling="2026-01-07 03:49:10.822442129 +0000 UTC m=+997.388136874" observedRunningTime="2026-01-07 03:49:12.87969035 +0000 UTC m=+999.445385085" watchObservedRunningTime="2026-01-07 03:49:12.883178123 +0000 UTC m=+999.448872858" Jan 07 03:49:16 crc kubenswrapper[4980]: I0107 03:49:16.979806 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:49:17 crc kubenswrapper[4980]: I0107 03:49:17.032448 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bnxpz"] Jan 07 03:49:17 crc kubenswrapper[4980]: I0107 03:49:17.032740 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" 
containerName="dnsmasq-dns" containerID="cri-o://eba7a286d8a02ad6862a3584602c83744854665caac91111e303538ffa4d7a6c" gracePeriod=10 Jan 07 03:49:17 crc kubenswrapper[4980]: I0107 03:49:17.036096 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:49:17 crc kubenswrapper[4980]: I0107 03:49:17.904050 4980 generic.go:334] "Generic (PLEG): container finished" podID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerID="eba7a286d8a02ad6862a3584602c83744854665caac91111e303538ffa4d7a6c" exitCode=0 Jan 07 03:49:17 crc kubenswrapper[4980]: I0107 03:49:17.904089 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" event={"ID":"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2","Type":"ContainerDied","Data":"eba7a286d8a02ad6862a3584602c83744854665caac91111e303538ffa4d7a6c"} Jan 07 03:49:21 crc kubenswrapper[4980]: I0107 03:49:21.733614 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.96:5353: connect: connection refused" Jan 07 03:49:23 crc kubenswrapper[4980]: E0107 03:49:23.524541 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Jan 07 03:49:23 crc kubenswrapper[4980]: E0107 03:49:23.524753 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdh6dh8h658h658h59ch677h578hb4h57bh7fh6dh99hf5h5d8h699h586hf4h5h86hf7h549h5cbh5d4h656h648hdfh586h65h647h5bbh5dbq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92qhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{
Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(bc84f69f-9bab-40e5-80a8-75266ef8f4b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.130350 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.237255 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtlmr\" (UniqueName: \"kubernetes.io/projected/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-kube-api-access-mtlmr\") pod \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.237375 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-dns-svc\") pod \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.237677 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-config\") pod \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\" (UID: \"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2\") " Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.245295 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-kube-api-access-mtlmr" (OuterVolumeSpecName: "kube-api-access-mtlmr") pod "55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" (UID: "55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2"). InnerVolumeSpecName "kube-api-access-mtlmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.283105 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" (UID: "55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.285510 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-config" (OuterVolumeSpecName: "config") pod "55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" (UID: "55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.340233 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.340302 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtlmr\" (UniqueName: \"kubernetes.io/projected/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-kube-api-access-mtlmr\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.340335 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.976850 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" event={"ID":"55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2","Type":"ContainerDied","Data":"1ad6f31d6afff09d8457b24eb343cd449d2172d79c1a4470b1099972ade2ff0c"} Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.977006 4980 scope.go:117] "RemoveContainer" containerID="eba7a286d8a02ad6862a3584602c83744854665caac91111e303538ffa4d7a6c" Jan 07 03:49:24 crc kubenswrapper[4980]: I0107 03:49:24.977349 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bnxpz" Jan 07 03:49:25 crc kubenswrapper[4980]: I0107 03:49:25.099686 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bnxpz"] Jan 07 03:49:25 crc kubenswrapper[4980]: I0107 03:49:25.107205 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bnxpz"] Jan 07 03:49:25 crc kubenswrapper[4980]: I0107 03:49:25.415504 4980 scope.go:117] "RemoveContainer" containerID="4e0b97bf5301c495e50ec237a495e868453b27dd575f7266ea9eea3c773c9b57" Jan 07 03:49:25 crc kubenswrapper[4980]: I0107 03:49:25.748463 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" path="/var/lib/kubelet/pods/55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2/volumes" Jan 07 03:49:26 crc kubenswrapper[4980]: E0107 03:49:26.343672 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 07 03:49:26 crc kubenswrapper[4980]: E0107 03:49:26.343736 4980 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 07 03:49:26 crc kubenswrapper[4980]: E0107 03:49:26.343885 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggfh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(f3fa7e62-6ab5-4edb-9311-9b49a85c766b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Jan 07 03:49:26 crc kubenswrapper[4980]: E0107 03:49:26.345130 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" Jan 07 03:49:27 crc kubenswrapper[4980]: E0107 03:49:27.002141 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" Jan 07 03:49:28 crc kubenswrapper[4980]: E0107 03:49:28.701628 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="bc84f69f-9bab-40e5-80a8-75266ef8f4b7" Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.026209 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nfg5" event={"ID":"2b80c3b0-701f-4616-b851-c954a9421bf6","Type":"ContainerStarted","Data":"8af37f116f510de311d570077674dc1263ff88c917bc8bc6fe0afb2b9e3364ec"} Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.027881 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3336b2a3-f175-44d1-9771-adabe71eea6c","Type":"ContainerStarted","Data":"8ce3d729d523a6d8e0db02d381a802e8a91f0414782ff3aee5cfd80100a293f6"} Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.030174 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"808ebed8-cef0-4938-9ad2-64f28d9c8af2","Type":"ContainerStarted","Data":"c7a32405dc6148fecd86490f9b60f8d9dbdd23b1d09734b1b1d1032e2a22303e"} Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.031686 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94rwj" event={"ID":"4567269f-c5aa-44a8-8e68-c0dc01c2b55c","Type":"ContainerStarted","Data":"a167a033bb2d6ed2dd75c71d1787367c8c4654fa17ca74e20ae07e102f9f2b82"} Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.031859 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-94rwj" Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.033239 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc84f69f-9bab-40e5-80a8-75266ef8f4b7","Type":"ContainerStarted","Data":"58c44057aa385cd25191bfdea7a2b3e0c3c5960caf6e49ed4aed0a7b0738aa18"} Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.034644 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cf13ed1a-99f7-4574-a18a-7e559c48ddaa","Type":"ContainerStarted","Data":"ec7a21a948eb4515284b15254741f7ebc96a46df657bbba2ff78d1bdf10b338d"} Jan 07 03:49:29 crc kubenswrapper[4980]: E0107 03:49:29.034885 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="bc84f69f-9bab-40e5-80a8-75266ef8f4b7" Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.034918 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.090304 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-94rwj" podStartSLOduration=13.920162943 
podStartE2EDuration="28.090285565s" podCreationTimestamp="2026-01-07 03:49:01 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.813304526 +0000 UTC m=+997.378999301" lastFinishedPulling="2026-01-07 03:49:24.983427158 +0000 UTC m=+1011.549121923" observedRunningTime="2026-01-07 03:49:29.085911685 +0000 UTC m=+1015.651606420" watchObservedRunningTime="2026-01-07 03:49:29.090285565 +0000 UTC m=+1015.655980310" Jan 07 03:49:29 crc kubenswrapper[4980]: I0107 03:49:29.169496 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.476470749 podStartE2EDuration="34.169473041s" podCreationTimestamp="2026-01-07 03:48:55 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.813843873 +0000 UTC m=+997.379538648" lastFinishedPulling="2026-01-07 03:49:23.506846195 +0000 UTC m=+1010.072540940" observedRunningTime="2026-01-07 03:49:29.166043081 +0000 UTC m=+1015.731737816" watchObservedRunningTime="2026-01-07 03:49:29.169473041 +0000 UTC m=+1015.735167776" Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.042956 4980 generic.go:334] "Generic (PLEG): container finished" podID="2b80c3b0-701f-4616-b851-c954a9421bf6" containerID="8af37f116f510de311d570077674dc1263ff88c917bc8bc6fe0afb2b9e3364ec" exitCode=0 Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.043027 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nfg5" event={"ID":"2b80c3b0-701f-4616-b851-c954a9421bf6","Type":"ContainerDied","Data":"8af37f116f510de311d570077674dc1263ff88c917bc8bc6fe0afb2b9e3364ec"} Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.049897 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"808ebed8-cef0-4938-9ad2-64f28d9c8af2","Type":"ContainerStarted","Data":"13568ce7d19492345b0e9693517388e2b5928934b8da9e90a4b065dbdbf1ded9"} Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.051909 4980 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26440bb2-233e-47e3-bb46-9122523bce68","Type":"ContainerStarted","Data":"16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6"} Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.053687 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2","Type":"ContainerStarted","Data":"ddc13980e39ded47b88c0338d62a55470d35d70b092095c388cabc25032cd543"} Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.055710 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6714f510-9927-47da-bc8b-3e4a3995cdc6","Type":"ContainerStarted","Data":"ecbe432cc3de5ef2366a992d17d5ea48d3dc8f104563979f64f98a2a80743205"} Jan 07 03:49:30 crc kubenswrapper[4980]: E0107 03:49:30.074240 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="bc84f69f-9bab-40e5-80a8-75266ef8f4b7" Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.125818 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.685086495 podStartE2EDuration="26.125795646s" podCreationTimestamp="2026-01-07 03:49:04 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.826273253 +0000 UTC m=+997.391968028" lastFinishedPulling="2026-01-07 03:49:24.266982424 +0000 UTC m=+1010.832677179" observedRunningTime="2026-01-07 03:49:30.122745718 +0000 UTC m=+1016.688440463" watchObservedRunningTime="2026-01-07 03:49:30.125795646 +0000 UTC m=+1016.691490381" Jan 07 03:49:30 crc kubenswrapper[4980]: I0107 03:49:30.542372 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:31 
crc kubenswrapper[4980]: I0107 03:49:31.064492 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nfg5" event={"ID":"2b80c3b0-701f-4616-b851-c954a9421bf6","Type":"ContainerStarted","Data":"8c7caa5ca257ebdac43463035bb3528da38bd6b612c8795085694419bccb32ac"} Jan 07 03:49:31 crc kubenswrapper[4980]: I0107 03:49:31.064915 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nfg5" event={"ID":"2b80c3b0-701f-4616-b851-c954a9421bf6","Type":"ContainerStarted","Data":"8eef91259073a64575b269b03fc7da93ce0d270308f7a25c8e488ba5df16a163"} Jan 07 03:49:31 crc kubenswrapper[4980]: I0107 03:49:31.086944 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4nfg5" podStartSLOduration=17.393724248 podStartE2EDuration="30.086925907s" podCreationTimestamp="2026-01-07 03:49:01 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.813677478 +0000 UTC m=+997.379372263" lastFinishedPulling="2026-01-07 03:49:23.506879177 +0000 UTC m=+1010.072573922" observedRunningTime="2026-01-07 03:49:31.082404541 +0000 UTC m=+1017.648099276" watchObservedRunningTime="2026-01-07 03:49:31.086925907 +0000 UTC m=+1017.652620652" Jan 07 03:49:32 crc kubenswrapper[4980]: I0107 03:49:32.073355 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:32 crc kubenswrapper[4980]: I0107 03:49:32.073465 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4nfg5" Jan 07 03:49:32 crc kubenswrapper[4980]: I0107 03:49:32.542921 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:32 crc kubenswrapper[4980]: I0107 03:49:32.623855 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.083251 4980 
generic.go:334] "Generic (PLEG): container finished" podID="3336b2a3-f175-44d1-9771-adabe71eea6c" containerID="8ce3d729d523a6d8e0db02d381a802e8a91f0414782ff3aee5cfd80100a293f6" exitCode=0 Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.083389 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3336b2a3-f175-44d1-9771-adabe71eea6c","Type":"ContainerDied","Data":"8ce3d729d523a6d8e0db02d381a802e8a91f0414782ff3aee5cfd80100a293f6"} Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.087610 4980 generic.go:334] "Generic (PLEG): container finished" podID="c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2" containerID="ddc13980e39ded47b88c0338d62a55470d35d70b092095c388cabc25032cd543" exitCode=0 Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.087726 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2","Type":"ContainerDied","Data":"ddc13980e39ded47b88c0338d62a55470d35d70b092095c388cabc25032cd543"} Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.160190 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.450090 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4fmrk"] Jan 07 03:49:33 crc kubenswrapper[4980]: E0107 03:49:33.450686 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerName="init" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.450704 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerName="init" Jan 07 03:49:33 crc kubenswrapper[4980]: E0107 03:49:33.450716 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerName="dnsmasq-dns" Jan 07 03:49:33 crc 
kubenswrapper[4980]: I0107 03:49:33.450722 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerName="dnsmasq-dns" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.450861 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f2ec5a-9ef5-4f32-aaa9-c8c8972035c2" containerName="dnsmasq-dns" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.451618 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.452919 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.461873 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4fmrk"] Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.522163 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.522243 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-config\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.522290 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqts\" (UniqueName: \"kubernetes.io/projected/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-kube-api-access-9zqts\") pod 
\"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.522321 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.558847 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hjmmm"] Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.559941 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.564300 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.572187 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hjmmm"] Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.623523 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.623631 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc 
kubenswrapper[4980]: I0107 03:49:33.623672 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-config\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.623713 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zqts\" (UniqueName: \"kubernetes.io/projected/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-kube-api-access-9zqts\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.624721 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.624755 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-config\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.625277 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.645376 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9zqts\" (UniqueName: \"kubernetes.io/projected/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-kube-api-access-9zqts\") pod \"dnsmasq-dns-7f896c8c65-4fmrk\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.724756 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-config\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.724809 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-ovn-rundir\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.724854 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-ovs-rundir\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.724991 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2k9\" (UniqueName: \"kubernetes.io/projected/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-kube-api-access-7m2k9\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.725052 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-combined-ca-bundle\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.725320 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.772093 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.826539 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-config\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.827188 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-ovn-rundir\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.827713 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-ovn-rundir\") pod \"ovn-controller-metrics-hjmmm\" (UID: 
\"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.828008 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-ovs-rundir\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.828040 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m2k9\" (UniqueName: \"kubernetes.io/projected/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-kube-api-access-7m2k9\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.828137 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-ovs-rundir\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.829467 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.829605 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-combined-ca-bundle\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.829693 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.834106 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.838036 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-combined-ca-bundle\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.838097 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-config\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.843461 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2k9\" (UniqueName: \"kubernetes.io/projected/b8f6a4d2-652b-4f9a-ad2e-b974c9062112-kube-api-access-7m2k9\") pod \"ovn-controller-metrics-hjmmm\" (UID: \"b8f6a4d2-652b-4f9a-ad2e-b974c9062112\") " pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.864604 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4fmrk"] Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 
03:49:33.876520 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hjmmm" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.900236 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-n59pz"] Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.907863 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.918268 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-n59pz"] Jan 07 03:49:33 crc kubenswrapper[4980]: I0107 03:49:33.919483 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.040879 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.040971 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.041026 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-config\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.041048 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.041097 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjcp\" (UniqueName: \"kubernetes.io/projected/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-kube-api-access-hxjcp\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.111721 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3336b2a3-f175-44d1-9771-adabe71eea6c","Type":"ContainerStarted","Data":"ee7eaf1ec6e72bef2258be6a1b9dc42e43e1c911d975548911adf391366926cc"} Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.116493 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2","Type":"ContainerStarted","Data":"17aa0ce52e900e7d4cb9a74bce139a002a0b191560b8821907d2eb1f5b773223"} Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.143979 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.144030 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.144058 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-config\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.144079 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.144161 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjcp\" (UniqueName: \"kubernetes.io/projected/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-kube-api-access-hxjcp\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.145338 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.145386 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-config\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.145401 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.146026 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.148785 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.632021939 podStartE2EDuration="40.148769515s" podCreationTimestamp="2026-01-07 03:48:54 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.749236436 +0000 UTC m=+997.314931181" lastFinishedPulling="2026-01-07 03:49:24.265983972 +0000 UTC m=+1010.831678757" observedRunningTime="2026-01-07 03:49:34.131889363 +0000 UTC m=+1020.697584098" watchObservedRunningTime="2026-01-07 03:49:34.148769515 +0000 UTC m=+1020.714464250" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.161869 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.924406088 podStartE2EDuration="41.161852906s" podCreationTimestamp="2026-01-07 03:48:53 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.749289657 +0000 UTC m=+997.314984432" lastFinishedPulling="2026-01-07 
03:49:24.986736475 +0000 UTC m=+1011.552431250" observedRunningTime="2026-01-07 03:49:34.156899117 +0000 UTC m=+1020.722593852" watchObservedRunningTime="2026-01-07 03:49:34.161852906 +0000 UTC m=+1020.727547631" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.172296 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjcp\" (UniqueName: \"kubernetes.io/projected/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-kube-api-access-hxjcp\") pod \"dnsmasq-dns-86db49b7ff-n59pz\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.185848 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hjmmm"] Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.250648 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.312862 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4fmrk"] Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.406961 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.406992 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 07 03:49:34 crc kubenswrapper[4980]: I0107 03:49:34.829643 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-n59pz"] Jan 07 03:49:34 crc kubenswrapper[4980]: W0107 03:49:34.838277 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbda1e80c_7ad2_4bda_a977_0d96c9f2c767.slice/crio-213be9ca7f13cde7325061bd79cf41ec6114baa6dc07b7749bf839419e070333 WatchSource:0}: Error finding container 
213be9ca7f13cde7325061bd79cf41ec6114baa6dc07b7749bf839419e070333: Status 404 returned error can't find the container with id 213be9ca7f13cde7325061bd79cf41ec6114baa6dc07b7749bf839419e070333 Jan 07 03:49:35 crc kubenswrapper[4980]: I0107 03:49:35.128811 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" event={"ID":"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9","Type":"ContainerStarted","Data":"09c013ca09903024133808198176b41492436be91521615430e01b1c7275a65b"} Jan 07 03:49:35 crc kubenswrapper[4980]: I0107 03:49:35.128876 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" event={"ID":"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9","Type":"ContainerStarted","Data":"effb563c9116b10f348c61d65ae6c1106fb1597f5668b00bffa4ba35aeffb2dd"} Jan 07 03:49:35 crc kubenswrapper[4980]: I0107 03:49:35.135398 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hjmmm" event={"ID":"b8f6a4d2-652b-4f9a-ad2e-b974c9062112","Type":"ContainerStarted","Data":"b1d3f9c373a012f60b1ee584ed2a1ceeeda2a5b5c38c75d4bcd1f8944976b703"} Jan 07 03:49:35 crc kubenswrapper[4980]: I0107 03:49:35.135477 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hjmmm" event={"ID":"b8f6a4d2-652b-4f9a-ad2e-b974c9062112","Type":"ContainerStarted","Data":"33570d52a6f486dff0ab2ccf155671984d372a2d936605863d6c8c954647a58c"} Jan 07 03:49:35 crc kubenswrapper[4980]: I0107 03:49:35.137965 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" event={"ID":"bda1e80c-7ad2-4bda-a977-0d96c9f2c767","Type":"ContainerStarted","Data":"213be9ca7f13cde7325061bd79cf41ec6114baa6dc07b7749bf839419e070333"} Jan 07 03:49:35 crc kubenswrapper[4980]: I0107 03:49:35.876824 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 07 03:49:35 crc kubenswrapper[4980]: I0107 
03:49:35.877250 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.148536 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" event={"ID":"bda1e80c-7ad2-4bda-a977-0d96c9f2c767","Type":"ContainerStarted","Data":"755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e"} Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.150229 4980 generic.go:334] "Generic (PLEG): container finished" podID="c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" containerID="09c013ca09903024133808198176b41492436be91521615430e01b1c7275a65b" exitCode=0 Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.150323 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" event={"ID":"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9","Type":"ContainerDied","Data":"09c013ca09903024133808198176b41492436be91521615430e01b1c7275a65b"} Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.227610 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hjmmm" podStartSLOduration=3.22757922 podStartE2EDuration="3.22757922s" podCreationTimestamp="2026-01-07 03:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:49:36.211937637 +0000 UTC m=+1022.777632412" watchObservedRunningTime="2026-01-07 03:49:36.22757922 +0000 UTC m=+1022.793273995" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.267825 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.555793 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.592486 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-dns-svc\") pod \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.592534 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zqts\" (UniqueName: \"kubernetes.io/projected/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-kube-api-access-9zqts\") pod \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.592595 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-config\") pod \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.592621 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-ovsdbserver-sb\") pod \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\" (UID: \"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9\") " Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.604488 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-kube-api-access-9zqts" (OuterVolumeSpecName: "kube-api-access-9zqts") pod "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" (UID: "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9"). InnerVolumeSpecName "kube-api-access-9zqts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.617548 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" (UID: "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.620338 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" (UID: "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.624455 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-config" (OuterVolumeSpecName: "config") pod "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" (UID: "c0428e1f-d9d7-41ab-b500-a6019b9d3ed9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.700162 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.700477 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.700598 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zqts\" (UniqueName: \"kubernetes.io/projected/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-kube-api-access-9zqts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:36 crc kubenswrapper[4980]: I0107 03:49:36.700682 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:37 crc kubenswrapper[4980]: I0107 03:49:37.158380 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" event={"ID":"c0428e1f-d9d7-41ab-b500-a6019b9d3ed9","Type":"ContainerDied","Data":"effb563c9116b10f348c61d65ae6c1106fb1597f5668b00bffa4ba35aeffb2dd"} Jan 07 03:49:37 crc kubenswrapper[4980]: I0107 03:49:37.158446 4980 scope.go:117] "RemoveContainer" containerID="09c013ca09903024133808198176b41492436be91521615430e01b1c7275a65b" Jan 07 03:49:37 crc kubenswrapper[4980]: I0107 03:49:37.158403 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4fmrk" Jan 07 03:49:37 crc kubenswrapper[4980]: I0107 03:49:37.279825 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4fmrk"] Jan 07 03:49:37 crc kubenswrapper[4980]: I0107 03:49:37.326195 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4fmrk"] Jan 07 03:49:37 crc kubenswrapper[4980]: I0107 03:49:37.748839 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" path="/var/lib/kubelet/pods/c0428e1f-d9d7-41ab-b500-a6019b9d3ed9/volumes" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.167835 4980 generic.go:334] "Generic (PLEG): container finished" podID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerID="755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e" exitCode=0 Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.167890 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" event={"ID":"bda1e80c-7ad2-4bda-a977-0d96c9f2c767","Type":"ContainerDied","Data":"755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e"} Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.465404 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-n59pz"] Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.528781 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-prhnz"] Jan 07 03:49:38 crc kubenswrapper[4980]: E0107 03:49:38.538778 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" containerName="init" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.538812 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" containerName="init" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.539084 4980 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c0428e1f-d9d7-41ab-b500-a6019b9d3ed9" containerName="init" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.539891 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.553064 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-prhnz"] Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.651482 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-dns-svc\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.651577 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcrvh\" (UniqueName: \"kubernetes.io/projected/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-kube-api-access-fcrvh\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.651628 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.651684 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-config\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: 
\"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.651733 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.753182 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcrvh\" (UniqueName: \"kubernetes.io/projected/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-kube-api-access-fcrvh\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.753248 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.753297 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-config\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.753338 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: 
\"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.753370 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-dns-svc\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.754124 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-config\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.754380 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-dns-svc\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.754606 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.754994 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: 
I0107 03:49:38.770761 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcrvh\" (UniqueName: \"kubernetes.io/projected/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-kube-api-access-fcrvh\") pod \"dnsmasq-dns-698758b865-prhnz\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:38 crc kubenswrapper[4980]: I0107 03:49:38.909752 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.177720 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" event={"ID":"bda1e80c-7ad2-4bda-a977-0d96c9f2c767","Type":"ContainerStarted","Data":"3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad"} Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.178202 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.177831 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" podUID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerName="dnsmasq-dns" containerID="cri-o://3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad" gracePeriod=10 Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.183245 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3fa7e62-6ab5-4edb-9311-9b49a85c766b","Type":"ContainerStarted","Data":"51f38411bc53cb1db055bab6f3da7535a99d2f5ed034348c25f0d72319b43023"} Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.183520 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.199237 4980 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" podStartSLOduration=6.199221998 podStartE2EDuration="6.199221998s" podCreationTimestamp="2026-01-07 03:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:49:39.197643098 +0000 UTC m=+1025.763337833" watchObservedRunningTime="2026-01-07 03:49:39.199221998 +0000 UTC m=+1025.764916733" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.216125 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.667633392 podStartE2EDuration="41.216105591s" podCreationTimestamp="2026-01-07 03:48:58 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.816642483 +0000 UTC m=+997.382337268" lastFinishedPulling="2026-01-07 03:49:38.365114732 +0000 UTC m=+1024.930809467" observedRunningTime="2026-01-07 03:49:39.211239235 +0000 UTC m=+1025.776933970" watchObservedRunningTime="2026-01-07 03:49:39.216105591 +0000 UTC m=+1025.781800336" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.356515 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-prhnz"] Jan 07 03:49:39 crc kubenswrapper[4980]: W0107 03:49:39.369784 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e3ab5a9_26d4_451c_ab15_8f8aae9e17d9.slice/crio-af736be5ed371a634cfd3faffde7dee1ae6c1e274f2e904a38c1f971c66fca29 WatchSource:0}: Error finding container af736be5ed371a634cfd3faffde7dee1ae6c1e274f2e904a38c1f971c66fca29: Status 404 returned error can't find the container with id af736be5ed371a634cfd3faffde7dee1ae6c1e274f2e904a38c1f971c66fca29 Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.622708 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.701004 4980 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.704711 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.704724 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.704744 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.705997 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xj69b" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.706225 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.800023 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.800225 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6b5878bb-8928-4957-a27d-ce18da212460-lock\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.800274 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 
07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.800317 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg6vq\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-kube-api-access-qg6vq\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.800346 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6b5878bb-8928-4957-a27d-ce18da212460-cache\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.903770 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.903857 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6b5878bb-8928-4957-a27d-ce18da212460-lock\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.903883 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.903907 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg6vq\" (UniqueName: 
\"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-kube-api-access-qg6vq\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.903925 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6b5878bb-8928-4957-a27d-ce18da212460-cache\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.904441 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6b5878bb-8928-4957-a27d-ce18da212460-cache\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: E0107 03:49:39.904566 4980 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 07 03:49:39 crc kubenswrapper[4980]: E0107 03:49:39.904586 4980 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 07 03:49:39 crc kubenswrapper[4980]: E0107 03:49:39.904634 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift podName:6b5878bb-8928-4957-a27d-ce18da212460 nodeName:}" failed. No retries permitted until 2026-01-07 03:49:40.404613017 +0000 UTC m=+1026.970307752 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift") pod "swift-storage-0" (UID: "6b5878bb-8928-4957-a27d-ce18da212460") : configmap "swift-ring-files" not found Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.907744 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6b5878bb-8928-4957-a27d-ce18da212460-lock\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.908098 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.928689 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg6vq\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-kube-api-access-qg6vq\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.936037 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:39 crc kubenswrapper[4980]: I0107 03:49:39.937311 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.005015 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-nb\") pod \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.005184 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-config\") pod \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.005218 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxjcp\" (UniqueName: \"kubernetes.io/projected/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-kube-api-access-hxjcp\") pod \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.005259 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-sb\") pod \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.005306 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-dns-svc\") pod \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\" (UID: \"bda1e80c-7ad2-4bda-a977-0d96c9f2c767\") " Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.008953 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-kube-api-access-hxjcp" (OuterVolumeSpecName: "kube-api-access-hxjcp") pod "bda1e80c-7ad2-4bda-a977-0d96c9f2c767" (UID: "bda1e80c-7ad2-4bda-a977-0d96c9f2c767"). InnerVolumeSpecName "kube-api-access-hxjcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.038362 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bda1e80c-7ad2-4bda-a977-0d96c9f2c767" (UID: "bda1e80c-7ad2-4bda-a977-0d96c9f2c767"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.038946 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-config" (OuterVolumeSpecName: "config") pod "bda1e80c-7ad2-4bda-a977-0d96c9f2c767" (UID: "bda1e80c-7ad2-4bda-a977-0d96c9f2c767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.042750 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bda1e80c-7ad2-4bda-a977-0d96c9f2c767" (UID: "bda1e80c-7ad2-4bda-a977-0d96c9f2c767"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.044648 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bda1e80c-7ad2-4bda-a977-0d96c9f2c767" (UID: "bda1e80c-7ad2-4bda-a977-0d96c9f2c767"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.106978 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.107029 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxjcp\" (UniqueName: \"kubernetes.io/projected/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-kube-api-access-hxjcp\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.107054 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.107074 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.107091 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1e80c-7ad2-4bda-a977-0d96c9f2c767-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.194108 4980 generic.go:334] "Generic (PLEG): container finished" podID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerID="a82e2ee8c12dfd7a9076ec0fcf1d8b1b8d9b83145be79fbee6cf8efbd1368a37" exitCode=0 Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.194201 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-prhnz" event={"ID":"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9","Type":"ContainerDied","Data":"a82e2ee8c12dfd7a9076ec0fcf1d8b1b8d9b83145be79fbee6cf8efbd1368a37"} Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 
03:49:40.194257 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-prhnz" event={"ID":"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9","Type":"ContainerStarted","Data":"af736be5ed371a634cfd3faffde7dee1ae6c1e274f2e904a38c1f971c66fca29"} Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.196393 4980 generic.go:334] "Generic (PLEG): container finished" podID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerID="3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad" exitCode=0 Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.196477 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" event={"ID":"bda1e80c-7ad2-4bda-a977-0d96c9f2c767","Type":"ContainerDied","Data":"3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad"} Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.196512 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" event={"ID":"bda1e80c-7ad2-4bda-a977-0d96c9f2c767","Type":"ContainerDied","Data":"213be9ca7f13cde7325061bd79cf41ec6114baa6dc07b7749bf839419e070333"} Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.196529 4980 scope.go:117] "RemoveContainer" containerID="3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.196672 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-n59pz" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.297752 4980 scope.go:117] "RemoveContainer" containerID="755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.304581 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-n59pz"] Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.314069 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-n59pz"] Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.327876 4980 scope.go:117] "RemoveContainer" containerID="3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad" Jan 07 03:49:40 crc kubenswrapper[4980]: E0107 03:49:40.328713 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad\": container with ID starting with 3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad not found: ID does not exist" containerID="3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.328765 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad"} err="failed to get container status \"3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad\": rpc error: code = NotFound desc = could not find container \"3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad\": container with ID starting with 3ad409d4fd9459118fa1527228ba2d4e955fe3b4dffb8c881c9344b613abbbad not found: ID does not exist" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.328797 4980 scope.go:117] "RemoveContainer" containerID="755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e" Jan 07 
03:49:40 crc kubenswrapper[4980]: E0107 03:49:40.331904 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e\": container with ID starting with 755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e not found: ID does not exist" containerID="755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.331950 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e"} err="failed to get container status \"755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e\": rpc error: code = NotFound desc = could not find container \"755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e\": container with ID starting with 755ffec3f5b0eaf72751f6810975e25ab8aafc9b575e8c6991e14e84a54c2c9e not found: ID does not exist" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.412210 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:40 crc kubenswrapper[4980]: E0107 03:49:40.412474 4980 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 07 03:49:40 crc kubenswrapper[4980]: E0107 03:49:40.412723 4980 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 07 03:49:40 crc kubenswrapper[4980]: E0107 03:49:40.412779 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift 
podName:6b5878bb-8928-4957-a27d-ce18da212460 nodeName:}" failed. No retries permitted until 2026-01-07 03:49:41.412758764 +0000 UTC m=+1027.978453499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift") pod "swift-storage-0" (UID: "6b5878bb-8928-4957-a27d-ce18da212460") : configmap "swift-ring-files" not found Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.658787 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 07 03:49:40 crc kubenswrapper[4980]: I0107 03:49:40.754937 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 07 03:49:41 crc kubenswrapper[4980]: I0107 03:49:41.212543 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-prhnz" event={"ID":"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9","Type":"ContainerStarted","Data":"2d8de6c6c26fcfb3dc380bfb0007f10d98f628ee967485923ae03c44e7c6f966"} Jan 07 03:49:41 crc kubenswrapper[4980]: I0107 03:49:41.213680 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:49:41 crc kubenswrapper[4980]: I0107 03:49:41.232911 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-prhnz" podStartSLOduration=3.232893562 podStartE2EDuration="3.232893562s" podCreationTimestamp="2026-01-07 03:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:49:41.228460189 +0000 UTC m=+1027.794154924" watchObservedRunningTime="2026-01-07 03:49:41.232893562 +0000 UTC m=+1027.798588297" Jan 07 03:49:41 crc kubenswrapper[4980]: I0107 03:49:41.433030 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:41 crc kubenswrapper[4980]: E0107 03:49:41.433298 4980 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 07 03:49:41 crc kubenswrapper[4980]: E0107 03:49:41.433328 4980 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 07 03:49:41 crc kubenswrapper[4980]: E0107 03:49:41.433392 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift podName:6b5878bb-8928-4957-a27d-ce18da212460 nodeName:}" failed. No retries permitted until 2026-01-07 03:49:43.433371377 +0000 UTC m=+1029.999066112 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift") pod "swift-storage-0" (UID: "6b5878bb-8928-4957-a27d-ce18da212460") : configmap "swift-ring-files" not found Jan 07 03:49:41 crc kubenswrapper[4980]: I0107 03:49:41.745111 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" path="/var/lib/kubelet/pods/bda1e80c-7ad2-4bda-a977-0d96c9f2c767/volumes" Jan 07 03:49:42 crc kubenswrapper[4980]: I0107 03:49:42.225091 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc84f69f-9bab-40e5-80a8-75266ef8f4b7","Type":"ContainerStarted","Data":"270cf4c97a788780ff186621ecc9eb0755738e7685fba5fccf3eeef5b4d38195"} Jan 07 03:49:42 crc kubenswrapper[4980]: I0107 03:49:42.248735 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.587971631 podStartE2EDuration="39.24870948s" 
podCreationTimestamp="2026-01-07 03:49:03 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.813949006 +0000 UTC m=+997.379643781" lastFinishedPulling="2026-01-07 03:49:41.474686895 +0000 UTC m=+1028.040381630" observedRunningTime="2026-01-07 03:49:42.245059363 +0000 UTC m=+1028.810754108" watchObservedRunningTime="2026-01-07 03:49:42.24870948 +0000 UTC m=+1028.814404225" Jan 07 03:49:42 crc kubenswrapper[4980]: I0107 03:49:42.479062 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 07 03:49:42 crc kubenswrapper[4980]: I0107 03:49:42.711173 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="3336b2a3-f175-44d1-9771-adabe71eea6c" containerName="galera" probeResult="failure" output=< Jan 07 03:49:42 crc kubenswrapper[4980]: wsrep_local_state_comment (Joined) differs from Synced Jan 07 03:49:42 crc kubenswrapper[4980]: > Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.097645 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hxhcv"] Jan 07 03:49:43 crc kubenswrapper[4980]: E0107 03:49:43.098579 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerName="init" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.098611 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerName="init" Jan 07 03:49:43 crc kubenswrapper[4980]: E0107 03:49:43.098671 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerName="dnsmasq-dns" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.098686 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerName="dnsmasq-dns" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.098989 4980 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bda1e80c-7ad2-4bda-a977-0d96c9f2c767" containerName="dnsmasq-dns" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.099869 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.105685 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.108215 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hxhcv"] Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.163795 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc879\" (UniqueName: \"kubernetes.io/projected/5df09f52-8104-48c9-940a-9d0379637acc-kube-api-access-qc879\") pod \"root-account-create-update-hxhcv\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") " pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.163914 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df09f52-8104-48c9-940a-9d0379637acc-operator-scripts\") pod \"root-account-create-update-hxhcv\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") " pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.264795 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc879\" (UniqueName: \"kubernetes.io/projected/5df09f52-8104-48c9-940a-9d0379637acc-kube-api-access-qc879\") pod \"root-account-create-update-hxhcv\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") " pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.264911 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df09f52-8104-48c9-940a-9d0379637acc-operator-scripts\") pod \"root-account-create-update-hxhcv\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") " pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.266188 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df09f52-8104-48c9-940a-9d0379637acc-operator-scripts\") pod \"root-account-create-update-hxhcv\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") " pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.290118 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc879\" (UniqueName: \"kubernetes.io/projected/5df09f52-8104-48c9-940a-9d0379637acc-kube-api-access-qc879\") pod \"root-account-create-update-hxhcv\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") " pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.467742 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:43 crc kubenswrapper[4980]: E0107 03:49:43.467913 4980 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 07 03:49:43 crc kubenswrapper[4980]: E0107 03:49:43.467930 4980 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 07 03:49:43 crc kubenswrapper[4980]: E0107 03:49:43.467980 4980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift podName:6b5878bb-8928-4957-a27d-ce18da212460 nodeName:}" failed. No retries permitted until 2026-01-07 03:49:47.467964609 +0000 UTC m=+1034.033659344 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift") pod "swift-storage-0" (UID: "6b5878bb-8928-4957-a27d-ce18da212460") : configmap "swift-ring-files" not found Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.475161 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hxhcv" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.534972 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-89vfr"] Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.536366 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.539181 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.539306 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.539590 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.550893 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-89vfr"] Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.568995 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-swiftconf\") pod 
\"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.569030 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-dispersionconf\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.569072 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-combined-ca-bundle\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.569109 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-etc-swift\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.569129 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlm6\" (UniqueName: \"kubernetes.io/projected/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-kube-api-access-dxlm6\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.569156 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-scripts\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.569184 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-ring-data-devices\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.670863 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-swiftconf\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.671189 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-dispersionconf\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.671254 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-combined-ca-bundle\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.671354 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-etc-swift\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.671386 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlm6\" (UniqueName: \"kubernetes.io/projected/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-kube-api-access-dxlm6\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.671444 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-scripts\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.671500 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-ring-data-devices\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.672263 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-etc-swift\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.672331 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-ring-data-devices\") pod 
\"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.672778 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-scripts\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.674951 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-swiftconf\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.675885 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-dispersionconf\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.676234 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-combined-ca-bundle\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.693905 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlm6\" (UniqueName: \"kubernetes.io/projected/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-kube-api-access-dxlm6\") pod \"swift-ring-rebalance-89vfr\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") " pod="openstack/swift-ring-rebalance-89vfr" 
Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.726246 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hxhcv"]
Jan 07 03:49:43 crc kubenswrapper[4980]: W0107 03:49:43.730944 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5df09f52_8104_48c9_940a_9d0379637acc.slice/crio-54296577f9d1b645e178e21a401670d71fa462a24d75f649d9be734ea1be88f0 WatchSource:0}: Error finding container 54296577f9d1b645e178e21a401670d71fa462a24d75f649d9be734ea1be88f0: Status 404 returned error can't find the container with id 54296577f9d1b645e178e21a401670d71fa462a24d75f649d9be734ea1be88f0
Jan 07 03:49:43 crc kubenswrapper[4980]: I0107 03:49:43.917528 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-89vfr"
Jan 07 03:49:44 crc kubenswrapper[4980]: I0107 03:49:44.177658 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-89vfr"]
Jan 07 03:49:44 crc kubenswrapper[4980]: I0107 03:49:44.244346 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-89vfr" event={"ID":"a3566d37-de40-4834-9bbc-48dc6fe7e9c5","Type":"ContainerStarted","Data":"a269131b3d3f2659718db2627ac5065e4c7024005926fc03114dcde5029a7b49"}
Jan 07 03:49:44 crc kubenswrapper[4980]: I0107 03:49:44.246386 4980 generic.go:334] "Generic (PLEG): container finished" podID="5df09f52-8104-48c9-940a-9d0379637acc" containerID="a62ea589814670e969aec9e78c97c580b8d1538f3796411cbe160a4044c9258d" exitCode=0
Jan 07 03:49:44 crc kubenswrapper[4980]: I0107 03:49:44.246424 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxhcv" event={"ID":"5df09f52-8104-48c9-940a-9d0379637acc","Type":"ContainerDied","Data":"a62ea589814670e969aec9e78c97c580b8d1538f3796411cbe160a4044c9258d"}
Jan 07 03:49:44 crc kubenswrapper[4980]: I0107 03:49:44.246452 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxhcv" event={"ID":"5df09f52-8104-48c9-940a-9d0379637acc","Type":"ContainerStarted","Data":"54296577f9d1b645e178e21a401670d71fa462a24d75f649d9be734ea1be88f0"}
Jan 07 03:49:44 crc kubenswrapper[4980]: I0107 03:49:44.308906 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 07 03:49:44 crc kubenswrapper[4980]: I0107 03:49:44.370101 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.257475 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.661093 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hxhcv"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.724153 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df09f52-8104-48c9-940a-9d0379637acc-operator-scripts\") pod \"5df09f52-8104-48c9-940a-9d0379637acc\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") "
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.724724 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc879\" (UniqueName: \"kubernetes.io/projected/5df09f52-8104-48c9-940a-9d0379637acc-kube-api-access-qc879\") pod \"5df09f52-8104-48c9-940a-9d0379637acc\" (UID: \"5df09f52-8104-48c9-940a-9d0379637acc\") "
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.725297 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df09f52-8104-48c9-940a-9d0379637acc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5df09f52-8104-48c9-940a-9d0379637acc" (UID: "5df09f52-8104-48c9-940a-9d0379637acc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.752391 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df09f52-8104-48c9-940a-9d0379637acc-kube-api-access-qc879" (OuterVolumeSpecName: "kube-api-access-qc879") pod "5df09f52-8104-48c9-940a-9d0379637acc" (UID: "5df09f52-8104-48c9-940a-9d0379637acc"). InnerVolumeSpecName "kube-api-access-qc879". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.827056 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df09f52-8104-48c9-940a-9d0379637acc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.827090 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc879\" (UniqueName: \"kubernetes.io/projected/5df09f52-8104-48c9-940a-9d0379637acc-kube-api-access-qc879\") on node \"crc\" DevicePath \"\""
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.885602 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pcrf5"]
Jan 07 03:49:45 crc kubenswrapper[4980]: E0107 03:49:45.885955 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df09f52-8104-48c9-940a-9d0379637acc" containerName="mariadb-account-create-update"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.885967 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df09f52-8104-48c9-940a-9d0379637acc" containerName="mariadb-account-create-update"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.886121 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df09f52-8104-48c9-940a-9d0379637acc" containerName="mariadb-account-create-update"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.886649 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.894792 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pcrf5"]
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.928655 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8030272c-dd7c-4eb4-822f-29fdff143d62-operator-scripts\") pod \"keystone-db-create-pcrf5\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.928767 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkkk\" (UniqueName: \"kubernetes.io/projected/8030272c-dd7c-4eb4-822f-29fdff143d62-kube-api-access-5gkkk\") pod \"keystone-db-create-pcrf5\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:45 crc kubenswrapper[4980]: I0107 03:49:45.973718 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.030240 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkkk\" (UniqueName: \"kubernetes.io/projected/8030272c-dd7c-4eb4-822f-29fdff143d62-kube-api-access-5gkkk\") pod \"keystone-db-create-pcrf5\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.030340 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8030272c-dd7c-4eb4-822f-29fdff143d62-operator-scripts\") pod \"keystone-db-create-pcrf5\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.031425 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8030272c-dd7c-4eb4-822f-29fdff143d62-operator-scripts\") pod \"keystone-db-create-pcrf5\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.048809 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-58ef-account-create-update-hg8wd"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.049870 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.057542 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.062813 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkkk\" (UniqueName: \"kubernetes.io/projected/8030272c-dd7c-4eb4-822f-29fdff143d62-kube-api-access-5gkkk\") pod \"keystone-db-create-pcrf5\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.064782 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-58ef-account-create-update-hg8wd"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.214764 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pcrf5"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.236911 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfj6d\" (UniqueName: \"kubernetes.io/projected/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-kube-api-access-mfj6d\") pod \"keystone-58ef-account-create-update-hg8wd\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.236997 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-operator-scripts\") pod \"keystone-58ef-account-create-update-hg8wd\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.241751 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wf7kv"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.244110 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.253233 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-41b5-account-create-update-wk7dn"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.254448 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.256162 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.268327 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hxhcv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.268325 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxhcv" event={"ID":"5df09f52-8104-48c9-940a-9d0379637acc","Type":"ContainerDied","Data":"54296577f9d1b645e178e21a401670d71fa462a24d75f649d9be734ea1be88f0"}
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.268369 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54296577f9d1b645e178e21a401670d71fa462a24d75f649d9be734ea1be88f0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.275024 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wf7kv"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.293506 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-41b5-account-create-update-wk7dn"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.322279 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.338385 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfj6d\" (UniqueName: \"kubernetes.io/projected/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-kube-api-access-mfj6d\") pod \"keystone-58ef-account-create-update-hg8wd\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.338433 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-operator-scripts\") pod \"keystone-58ef-account-create-update-hg8wd\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.339365 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-operator-scripts\") pod \"keystone-58ef-account-create-update-hg8wd\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.359239 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfj6d\" (UniqueName: \"kubernetes.io/projected/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-kube-api-access-mfj6d\") pod \"keystone-58ef-account-create-update-hg8wd\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.401058 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-58ef-account-create-update-hg8wd"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.433497 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c9d84"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.436128 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.439581 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7d0db6-cb2a-46cb-957b-2ec9db253878-operator-scripts\") pod \"placement-41b5-account-create-update-wk7dn\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.439766 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-operator-scripts\") pod \"placement-db-create-wf7kv\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.439917 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvnd\" (UniqueName: \"kubernetes.io/projected/ce7d0db6-cb2a-46cb-957b-2ec9db253878-kube-api-access-7xvnd\") pod \"placement-41b5-account-create-update-wk7dn\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.440045 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5nm\" (UniqueName: \"kubernetes.io/projected/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-kube-api-access-hc5nm\") pod \"placement-db-create-wf7kv\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.460994 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c9d84"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.507544 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.510120 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.518780 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.519409 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.520965 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7mqvn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.521196 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.526539 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.541479 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7d0db6-cb2a-46cb-957b-2ec9db253878-operator-scripts\") pod \"placement-41b5-account-create-update-wk7dn\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.541590 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-operator-scripts\") pod \"placement-db-create-wf7kv\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.541620 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6bb0d3-37af-4da6-a806-b276b642fabe-operator-scripts\") pod \"glance-db-create-c9d84\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.541693 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xvnd\" (UniqueName: \"kubernetes.io/projected/ce7d0db6-cb2a-46cb-957b-2ec9db253878-kube-api-access-7xvnd\") pod \"placement-41b5-account-create-update-wk7dn\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.541727 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc5nm\" (UniqueName: \"kubernetes.io/projected/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-kube-api-access-hc5nm\") pod \"placement-db-create-wf7kv\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.541757 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vsrp\" (UniqueName: \"kubernetes.io/projected/1d6bb0d3-37af-4da6-a806-b276b642fabe-kube-api-access-6vsrp\") pod \"glance-db-create-c9d84\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.542457 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7d0db6-cb2a-46cb-957b-2ec9db253878-operator-scripts\") pod \"placement-41b5-account-create-update-wk7dn\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.542958 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-operator-scripts\") pod \"placement-db-create-wf7kv\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.548221 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e29c-account-create-update-wmsz7"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.549572 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.567333 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc5nm\" (UniqueName: \"kubernetes.io/projected/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-kube-api-access-hc5nm\") pod \"placement-db-create-wf7kv\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.569844 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.569918 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xvnd\" (UniqueName: \"kubernetes.io/projected/ce7d0db6-cb2a-46cb-957b-2ec9db253878-kube-api-access-7xvnd\") pod \"placement-41b5-account-create-update-wk7dn\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.570164 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wf7kv"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.575096 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e29c-account-create-update-wmsz7"]
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.576384 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-41b5-account-create-update-wk7dn"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.647832 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743899ef-fe87-4dfb-9286-d9a68ade43c6-operator-scripts\") pod \"glance-e29c-account-create-update-wmsz7\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648150 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6bb0d3-37af-4da6-a806-b276b642fabe-operator-scripts\") pod \"glance-db-create-c9d84\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648193 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5687f55-2760-4b17-949f-7a691768ba40-config\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648209 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwgbd\" (UniqueName: \"kubernetes.io/projected/b5687f55-2760-4b17-949f-7a691768ba40-kube-api-access-vwgbd\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648249 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648277 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5687f55-2760-4b17-949f-7a691768ba40-scripts\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648307 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648326 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhk2m\" (UniqueName: \"kubernetes.io/projected/743899ef-fe87-4dfb-9286-d9a68ade43c6-kube-api-access-jhk2m\") pod \"glance-e29c-account-create-update-wmsz7\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648450 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5687f55-2760-4b17-949f-7a691768ba40-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648475 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.648513 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vsrp\" (UniqueName: \"kubernetes.io/projected/1d6bb0d3-37af-4da6-a806-b276b642fabe-kube-api-access-6vsrp\") pod \"glance-db-create-c9d84\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.649018 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6bb0d3-37af-4da6-a806-b276b642fabe-operator-scripts\") pod \"glance-db-create-c9d84\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.665071 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vsrp\" (UniqueName: \"kubernetes.io/projected/1d6bb0d3-37af-4da6-a806-b276b642fabe-kube-api-access-6vsrp\") pod \"glance-db-create-c9d84\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750009 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5687f55-2760-4b17-949f-7a691768ba40-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750060 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750137 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743899ef-fe87-4dfb-9286-d9a68ade43c6-operator-scripts\") pod \"glance-e29c-account-create-update-wmsz7\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750167 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwgbd\" (UniqueName: \"kubernetes.io/projected/b5687f55-2760-4b17-949f-7a691768ba40-kube-api-access-vwgbd\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750185 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5687f55-2760-4b17-949f-7a691768ba40-config\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750217 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750240 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5687f55-2760-4b17-949f-7a691768ba40-scripts\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750266 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750284 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhk2m\" (UniqueName: \"kubernetes.io/projected/743899ef-fe87-4dfb-9286-d9a68ade43c6-kube-api-access-jhk2m\") pod \"glance-e29c-account-create-update-wmsz7\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.750634 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5687f55-2760-4b17-949f-7a691768ba40-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.751171 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743899ef-fe87-4dfb-9286-d9a68ade43c6-operator-scripts\") pod \"glance-e29c-account-create-update-wmsz7\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.751372 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5687f55-2760-4b17-949f-7a691768ba40-scripts\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.751884 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5687f55-2760-4b17-949f-7a691768ba40-config\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.753712 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.755749 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.758626 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5687f55-2760-4b17-949f-7a691768ba40-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.768635 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhk2m\" (UniqueName: \"kubernetes.io/projected/743899ef-fe87-4dfb-9286-d9a68ade43c6-kube-api-access-jhk2m\") pod \"glance-e29c-account-create-update-wmsz7\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.769249 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwgbd\" (UniqueName: \"kubernetes.io/projected/b5687f55-2760-4b17-949f-7a691768ba40-kube-api-access-vwgbd\") pod \"ovn-northd-0\" (UID: \"b5687f55-2760-4b17-949f-7a691768ba40\") " pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.774994 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c9d84"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.847644 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 07 03:49:46 crc kubenswrapper[4980]: I0107 03:49:46.921627 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e29c-account-create-update-wmsz7"
Jan 07 03:49:47 crc kubenswrapper[4980]: I0107 03:49:47.469884 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0"
Jan 07 03:49:47 crc kubenswrapper[4980]: E0107 03:49:47.471403 4980 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 07 03:49:47 crc kubenswrapper[4980]: E0107 03:49:47.471464 4980 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 07 03:49:47 crc kubenswrapper[4980]: E0107 03:49:47.471803 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift podName:6b5878bb-8928-4957-a27d-ce18da212460 nodeName:}" failed. No retries permitted until 2026-01-07 03:49:55.471530905 +0000 UTC m=+1042.037225680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift") pod "swift-storage-0" (UID: "6b5878bb-8928-4957-a27d-ce18da212460") : configmap "swift-ring-files" not found
Jan 07 03:49:48 crc kubenswrapper[4980]: I0107 03:49:48.356032 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 07 03:49:48 crc kubenswrapper[4980]: I0107 03:49:48.911701 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-prhnz"
Jan 07 03:49:49 crc kubenswrapper[4980]: I0107 03:49:49.040001 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnkbg"]
Jan 07 03:49:49 crc kubenswrapper[4980]: I0107 03:49:49.040252 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" podUID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerName="dnsmasq-dns" containerID="cri-o://28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2" gracePeriod=10
Jan 07 03:49:50 crc kubenswrapper[4980]: I0107 03:49:50.986415 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg"
Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.053866 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-config\") pod \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") "
Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.053930 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-dns-svc\") pod \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") "
Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.053966 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbnq\" (UniqueName: \"kubernetes.io/projected/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-kube-api-access-ntbnq\") pod \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\" (UID: \"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9\") "
Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.057862 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-kube-api-access-ntbnq" (OuterVolumeSpecName: "kube-api-access-ntbnq") pod "0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" (UID: "0534e35c-7b58-476b-ba1d-a8b6d91cbcb9"). InnerVolumeSpecName "kube-api-access-ntbnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.097026 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-config" (OuterVolumeSpecName: "config") pod "0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" (UID: "0534e35c-7b58-476b-ba1d-a8b6d91cbcb9"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.102229 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" (UID: "0534e35c-7b58-476b-ba1d-a8b6d91cbcb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.154853 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.154878 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbnq\" (UniqueName: \"kubernetes.io/projected/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-kube-api-access-ntbnq\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.154890 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.271004 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e29c-account-create-update-wmsz7"] Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.321151 4980 generic.go:334] "Generic (PLEG): container finished" podID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerID="28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2" exitCode=0 Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.321247 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" 
event={"ID":"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9","Type":"ContainerDied","Data":"28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2"} Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.321287 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" event={"ID":"0534e35c-7b58-476b-ba1d-a8b6d91cbcb9","Type":"ContainerDied","Data":"9e8db1b39dbfddbee3abc86deffd3d0b60331f8bf3291e64834bd38472009dbb"} Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.321319 4980 scope.go:117] "RemoveContainer" containerID="28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.321364 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnkbg" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.328202 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e29c-account-create-update-wmsz7" event={"ID":"743899ef-fe87-4dfb-9286-d9a68ade43c6","Type":"ContainerStarted","Data":"ab5da54385ee9eeb6af750af66522bf1398246b46da10135f7f1ac2300689522"} Jan 07 03:49:51 crc kubenswrapper[4980]: W0107 03:49:51.336395 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7d0db6_cb2a_46cb_957b_2ec9db253878.slice/crio-65439f00effaab2d83364989f0bb696bbba0b659b1896b17f34f61ae9aee95d2 WatchSource:0}: Error finding container 65439f00effaab2d83364989f0bb696bbba0b659b1896b17f34f61ae9aee95d2: Status 404 returned error can't find the container with id 65439f00effaab2d83364989f0bb696bbba0b659b1896b17f34f61ae9aee95d2 Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.339766 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-89vfr" 
event={"ID":"a3566d37-de40-4834-9bbc-48dc6fe7e9c5","Type":"ContainerStarted","Data":"3e30a9bc16f5ef2b2b2c6ec61b08377131af2a3cd39b19da2f41abf6d5e929f9"} Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.348335 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-41b5-account-create-update-wk7dn"] Jan 07 03:49:51 crc kubenswrapper[4980]: W0107 03:49:51.350417 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d6bb0d3_37af_4da6_a806_b276b642fabe.slice/crio-6fab0e9c8b99b5d116f6040f410fff820fbbeaefc732ef77b2a0df2140eecf2c WatchSource:0}: Error finding container 6fab0e9c8b99b5d116f6040f410fff820fbbeaefc732ef77b2a0df2140eecf2c: Status 404 returned error can't find the container with id 6fab0e9c8b99b5d116f6040f410fff820fbbeaefc732ef77b2a0df2140eecf2c Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.373486 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.384386 4980 scope.go:117] "RemoveContainer" containerID="447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.390330 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c9d84"] Jan 07 03:49:51 crc kubenswrapper[4980]: W0107 03:49:51.395667 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb24e5f0_b43d_4512_ab23_b340b8b97c1d.slice/crio-ed544160e8eb115758fc604af26aa50b1cdac19bb1716dbf08da6756ffd5ab98 WatchSource:0}: Error finding container ed544160e8eb115758fc604af26aa50b1cdac19bb1716dbf08da6756ffd5ab98: Status 404 returned error can't find the container with id ed544160e8eb115758fc604af26aa50b1cdac19bb1716dbf08da6756ffd5ab98 Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.399264 4980 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/keystone-58ef-account-create-update-hg8wd"] Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.400107 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-89vfr" podStartSLOduration=1.865556432 podStartE2EDuration="8.400088479s" podCreationTimestamp="2026-01-07 03:49:43 +0000 UTC" firstStartedPulling="2026-01-07 03:49:44.189211568 +0000 UTC m=+1030.754906303" lastFinishedPulling="2026-01-07 03:49:50.723743605 +0000 UTC m=+1037.289438350" observedRunningTime="2026-01-07 03:49:51.365278099 +0000 UTC m=+1037.930972834" watchObservedRunningTime="2026-01-07 03:49:51.400088479 +0000 UTC m=+1037.965783214" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.424744 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnkbg"] Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.428822 4980 scope.go:117] "RemoveContainer" containerID="28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2" Jan 07 03:49:51 crc kubenswrapper[4980]: E0107 03:49:51.429294 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2\": container with ID starting with 28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2 not found: ID does not exist" containerID="28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.429327 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2"} err="failed to get container status \"28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2\": rpc error: code = NotFound desc = could not find container \"28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2\": container with ID starting 
with 28d28f3feea5b8d71496e84bd2b62dbb77529a1c3606839d617eaef94096dbc2 not found: ID does not exist" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.429350 4980 scope.go:117] "RemoveContainer" containerID="447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509" Jan 07 03:49:51 crc kubenswrapper[4980]: E0107 03:49:51.435660 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509\": container with ID starting with 447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509 not found: ID does not exist" containerID="447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.435709 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509"} err="failed to get container status \"447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509\": rpc error: code = NotFound desc = could not find container \"447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509\": container with ID starting with 447d7418c016dd2aa54977cb737993aaa434ee967876ff9299e34cb033a1b509 not found: ID does not exist" Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.443259 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnkbg"] Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.521094 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pcrf5"] Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.532915 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wf7kv"] Jan 07 03:49:51 crc kubenswrapper[4980]: W0107 03:49:51.546020 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8030272c_dd7c_4eb4_822f_29fdff143d62.slice/crio-fa04ae7ab051accaec523cb96aca30c1205ad882c92994b8ef12f58d37ac5a82 WatchSource:0}: Error finding container fa04ae7ab051accaec523cb96aca30c1205ad882c92994b8ef12f58d37ac5a82: Status 404 returned error can't find the container with id fa04ae7ab051accaec523cb96aca30c1205ad882c92994b8ef12f58d37ac5a82 Jan 07 03:49:51 crc kubenswrapper[4980]: W0107 03:49:51.547270 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7608666e_e3a7_4b17_ac4e_9fcacb09ccca.slice/crio-dab3bc7e3ae310af3417d2ec28ef51a521c15b8d661adce5e4b2cb12521932be WatchSource:0}: Error finding container dab3bc7e3ae310af3417d2ec28ef51a521c15b8d661adce5e4b2cb12521932be: Status 404 returned error can't find the container with id dab3bc7e3ae310af3417d2ec28ef51a521c15b8d661adce5e4b2cb12521932be Jan 07 03:49:51 crc kubenswrapper[4980]: I0107 03:49:51.749681 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" path="/var/lib/kubelet/pods/0534e35c-7b58-476b-ba1d-a8b6d91cbcb9/volumes" Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.355223 4980 generic.go:334] "Generic (PLEG): container finished" podID="ce7d0db6-cb2a-46cb-957b-2ec9db253878" containerID="b8c1359660a18df61abcf12b96cdf5c5566a95c78eb330ddf61adccccb82c95a" exitCode=0 Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.355348 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-41b5-account-create-update-wk7dn" event={"ID":"ce7d0db6-cb2a-46cb-957b-2ec9db253878","Type":"ContainerDied","Data":"b8c1359660a18df61abcf12b96cdf5c5566a95c78eb330ddf61adccccb82c95a"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.355400 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-41b5-account-create-update-wk7dn" 
event={"ID":"ce7d0db6-cb2a-46cb-957b-2ec9db253878","Type":"ContainerStarted","Data":"65439f00effaab2d83364989f0bb696bbba0b659b1896b17f34f61ae9aee95d2"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.357693 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pcrf5" event={"ID":"8030272c-dd7c-4eb4-822f-29fdff143d62","Type":"ContainerDied","Data":"35bddfeec114506eaa0c6f8cbbc19b864e3ca187252f52e611dde2e2aafaad89"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.357538 4980 generic.go:334] "Generic (PLEG): container finished" podID="8030272c-dd7c-4eb4-822f-29fdff143d62" containerID="35bddfeec114506eaa0c6f8cbbc19b864e3ca187252f52e611dde2e2aafaad89" exitCode=0 Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.358591 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pcrf5" event={"ID":"8030272c-dd7c-4eb4-822f-29fdff143d62","Type":"ContainerStarted","Data":"fa04ae7ab051accaec523cb96aca30c1205ad882c92994b8ef12f58d37ac5a82"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.361842 4980 generic.go:334] "Generic (PLEG): container finished" podID="7608666e-e3a7-4b17-ac4e-9fcacb09ccca" containerID="959143afb9e7179dff2e94c6d490634843d2e22e8625aa5ec54479ba41950dae" exitCode=0 Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.361923 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wf7kv" event={"ID":"7608666e-e3a7-4b17-ac4e-9fcacb09ccca","Type":"ContainerDied","Data":"959143afb9e7179dff2e94c6d490634843d2e22e8625aa5ec54479ba41950dae"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.361948 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wf7kv" event={"ID":"7608666e-e3a7-4b17-ac4e-9fcacb09ccca","Type":"ContainerStarted","Data":"dab3bc7e3ae310af3417d2ec28ef51a521c15b8d661adce5e4b2cb12521932be"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.363498 4980 generic.go:334] "Generic 
(PLEG): container finished" podID="1d6bb0d3-37af-4da6-a806-b276b642fabe" containerID="bcae7e3831f50d1b690aec5bdb3e624ee460003689f9c829efbf74ac0a85ba79" exitCode=0 Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.363547 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c9d84" event={"ID":"1d6bb0d3-37af-4da6-a806-b276b642fabe","Type":"ContainerDied","Data":"bcae7e3831f50d1b690aec5bdb3e624ee460003689f9c829efbf74ac0a85ba79"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.363576 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c9d84" event={"ID":"1d6bb0d3-37af-4da6-a806-b276b642fabe","Type":"ContainerStarted","Data":"6fab0e9c8b99b5d116f6040f410fff820fbbeaefc732ef77b2a0df2140eecf2c"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.365470 4980 generic.go:334] "Generic (PLEG): container finished" podID="743899ef-fe87-4dfb-9286-d9a68ade43c6" containerID="2d675c963fdc9291c38b465e0745ca435a711a3deacc665a0f9d227ea8d6eab5" exitCode=0 Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.365563 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e29c-account-create-update-wmsz7" event={"ID":"743899ef-fe87-4dfb-9286-d9a68ade43c6","Type":"ContainerDied","Data":"2d675c963fdc9291c38b465e0745ca435a711a3deacc665a0f9d227ea8d6eab5"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.368173 4980 generic.go:334] "Generic (PLEG): container finished" podID="cb24e5f0-b43d-4512-ab23-b340b8b97c1d" containerID="3fcf3512b3a684e46f61960dbfe9f67cf3ae98696d92d7be15eff3a5d32f0ac8" exitCode=0 Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.368209 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-58ef-account-create-update-hg8wd" event={"ID":"cb24e5f0-b43d-4512-ab23-b340b8b97c1d","Type":"ContainerDied","Data":"3fcf3512b3a684e46f61960dbfe9f67cf3ae98696d92d7be15eff3a5d32f0ac8"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 
03:49:52.368253 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-58ef-account-create-update-hg8wd" event={"ID":"cb24e5f0-b43d-4512-ab23-b340b8b97c1d","Type":"ContainerStarted","Data":"ed544160e8eb115758fc604af26aa50b1cdac19bb1716dbf08da6756ffd5ab98"} Jan 07 03:49:52 crc kubenswrapper[4980]: I0107 03:49:52.370867 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b5687f55-2760-4b17-949f-7a691768ba40","Type":"ContainerStarted","Data":"c8c101704cfa2a6a3ccb431d29d2326b332b7021a3858d1ffc1c3b645937322c"} Jan 07 03:49:53 crc kubenswrapper[4980]: I0107 03:49:53.385385 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b5687f55-2760-4b17-949f-7a691768ba40","Type":"ContainerStarted","Data":"7d290dad4bbcfbbbaf75bf99ff008a2fa4d4669ae530ef6eb4b165e16fefd7b2"} Jan 07 03:49:53 crc kubenswrapper[4980]: I0107 03:49:53.859502 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-58ef-account-create-update-hg8wd" Jan 07 03:49:53 crc kubenswrapper[4980]: I0107 03:49:53.916483 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfj6d\" (UniqueName: \"kubernetes.io/projected/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-kube-api-access-mfj6d\") pod \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " Jan 07 03:49:53 crc kubenswrapper[4980]: I0107 03:49:53.916688 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-operator-scripts\") pod \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\" (UID: \"cb24e5f0-b43d-4512-ab23-b340b8b97c1d\") " Jan 07 03:49:53 crc kubenswrapper[4980]: I0107 03:49:53.918745 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb24e5f0-b43d-4512-ab23-b340b8b97c1d" (UID: "cb24e5f0-b43d-4512-ab23-b340b8b97c1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:53 crc kubenswrapper[4980]: I0107 03:49:53.927740 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-kube-api-access-mfj6d" (OuterVolumeSpecName: "kube-api-access-mfj6d") pod "cb24e5f0-b43d-4512-ab23-b340b8b97c1d" (UID: "cb24e5f0-b43d-4512-ab23-b340b8b97c1d"). InnerVolumeSpecName "kube-api-access-mfj6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.020123 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfj6d\" (UniqueName: \"kubernetes.io/projected/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-kube-api-access-mfj6d\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.020149 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb24e5f0-b43d-4512-ab23-b340b8b97c1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.055717 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-41b5-account-create-update-wk7dn" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.059214 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c9d84" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.068415 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pcrf5" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.077986 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wf7kv" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.087314 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e29c-account-create-update-wmsz7" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121117 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xvnd\" (UniqueName: \"kubernetes.io/projected/ce7d0db6-cb2a-46cb-957b-2ec9db253878-kube-api-access-7xvnd\") pod \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121197 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhk2m\" (UniqueName: \"kubernetes.io/projected/743899ef-fe87-4dfb-9286-d9a68ade43c6-kube-api-access-jhk2m\") pod \"743899ef-fe87-4dfb-9286-d9a68ade43c6\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121229 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743899ef-fe87-4dfb-9286-d9a68ade43c6-operator-scripts\") pod \"743899ef-fe87-4dfb-9286-d9a68ade43c6\" (UID: \"743899ef-fe87-4dfb-9286-d9a68ade43c6\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121272 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vsrp\" (UniqueName: \"kubernetes.io/projected/1d6bb0d3-37af-4da6-a806-b276b642fabe-kube-api-access-6vsrp\") pod \"1d6bb0d3-37af-4da6-a806-b276b642fabe\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121314 4980 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-operator-scripts\") pod \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121340 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc5nm\" (UniqueName: \"kubernetes.io/projected/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-kube-api-access-hc5nm\") pod \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\" (UID: \"7608666e-e3a7-4b17-ac4e-9fcacb09ccca\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121396 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6bb0d3-37af-4da6-a806-b276b642fabe-operator-scripts\") pod \"1d6bb0d3-37af-4da6-a806-b276b642fabe\" (UID: \"1d6bb0d3-37af-4da6-a806-b276b642fabe\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121435 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkkk\" (UniqueName: \"kubernetes.io/projected/8030272c-dd7c-4eb4-822f-29fdff143d62-kube-api-access-5gkkk\") pod \"8030272c-dd7c-4eb4-822f-29fdff143d62\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121499 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8030272c-dd7c-4eb4-822f-29fdff143d62-operator-scripts\") pod \"8030272c-dd7c-4eb4-822f-29fdff143d62\" (UID: \"8030272c-dd7c-4eb4-822f-29fdff143d62\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.121615 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ce7d0db6-cb2a-46cb-957b-2ec9db253878-operator-scripts\") pod \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\" (UID: \"ce7d0db6-cb2a-46cb-957b-2ec9db253878\") " Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.123342 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743899ef-fe87-4dfb-9286-d9a68ade43c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "743899ef-fe87-4dfb-9286-d9a68ade43c6" (UID: "743899ef-fe87-4dfb-9286-d9a68ade43c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.123699 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d6bb0d3-37af-4da6-a806-b276b642fabe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d6bb0d3-37af-4da6-a806-b276b642fabe" (UID: "1d6bb0d3-37af-4da6-a806-b276b642fabe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.123883 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8030272c-dd7c-4eb4-822f-29fdff143d62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8030272c-dd7c-4eb4-822f-29fdff143d62" (UID: "8030272c-dd7c-4eb4-822f-29fdff143d62"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.123905 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7608666e-e3a7-4b17-ac4e-9fcacb09ccca" (UID: "7608666e-e3a7-4b17-ac4e-9fcacb09ccca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.125758 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743899ef-fe87-4dfb-9286-d9a68ade43c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.125795 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.125810 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6bb0d3-37af-4da6-a806-b276b642fabe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.125832 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8030272c-dd7c-4eb4-822f-29fdff143d62-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.126615 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7d0db6-cb2a-46cb-957b-2ec9db253878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce7d0db6-cb2a-46cb-957b-2ec9db253878" (UID: "ce7d0db6-cb2a-46cb-957b-2ec9db253878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.126913 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6bb0d3-37af-4da6-a806-b276b642fabe-kube-api-access-6vsrp" (OuterVolumeSpecName: "kube-api-access-6vsrp") pod "1d6bb0d3-37af-4da6-a806-b276b642fabe" (UID: "1d6bb0d3-37af-4da6-a806-b276b642fabe"). InnerVolumeSpecName "kube-api-access-6vsrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.129437 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743899ef-fe87-4dfb-9286-d9a68ade43c6-kube-api-access-jhk2m" (OuterVolumeSpecName: "kube-api-access-jhk2m") pod "743899ef-fe87-4dfb-9286-d9a68ade43c6" (UID: "743899ef-fe87-4dfb-9286-d9a68ade43c6"). InnerVolumeSpecName "kube-api-access-jhk2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.129500 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-kube-api-access-hc5nm" (OuterVolumeSpecName: "kube-api-access-hc5nm") pod "7608666e-e3a7-4b17-ac4e-9fcacb09ccca" (UID: "7608666e-e3a7-4b17-ac4e-9fcacb09ccca"). InnerVolumeSpecName "kube-api-access-hc5nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.132789 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8030272c-dd7c-4eb4-822f-29fdff143d62-kube-api-access-5gkkk" (OuterVolumeSpecName: "kube-api-access-5gkkk") pod "8030272c-dd7c-4eb4-822f-29fdff143d62" (UID: "8030272c-dd7c-4eb4-822f-29fdff143d62"). InnerVolumeSpecName "kube-api-access-5gkkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.153874 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7d0db6-cb2a-46cb-957b-2ec9db253878-kube-api-access-7xvnd" (OuterVolumeSpecName: "kube-api-access-7xvnd") pod "ce7d0db6-cb2a-46cb-957b-2ec9db253878" (UID: "ce7d0db6-cb2a-46cb-957b-2ec9db253878"). InnerVolumeSpecName "kube-api-access-7xvnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.227917 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc5nm\" (UniqueName: \"kubernetes.io/projected/7608666e-e3a7-4b17-ac4e-9fcacb09ccca-kube-api-access-hc5nm\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.228375 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkkk\" (UniqueName: \"kubernetes.io/projected/8030272c-dd7c-4eb4-822f-29fdff143d62-kube-api-access-5gkkk\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.228440 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7d0db6-cb2a-46cb-957b-2ec9db253878-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.228504 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xvnd\" (UniqueName: \"kubernetes.io/projected/ce7d0db6-cb2a-46cb-957b-2ec9db253878-kube-api-access-7xvnd\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.228585 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhk2m\" (UniqueName: \"kubernetes.io/projected/743899ef-fe87-4dfb-9286-d9a68ade43c6-kube-api-access-jhk2m\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.228643 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vsrp\" (UniqueName: \"kubernetes.io/projected/1d6bb0d3-37af-4da6-a806-b276b642fabe-kube-api-access-6vsrp\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.395878 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e29c-account-create-update-wmsz7" 
event={"ID":"743899ef-fe87-4dfb-9286-d9a68ade43c6","Type":"ContainerDied","Data":"ab5da54385ee9eeb6af750af66522bf1398246b46da10135f7f1ac2300689522"} Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.396315 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab5da54385ee9eeb6af750af66522bf1398246b46da10135f7f1ac2300689522" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.396130 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e29c-account-create-update-wmsz7" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.398021 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-58ef-account-create-update-hg8wd" event={"ID":"cb24e5f0-b43d-4512-ab23-b340b8b97c1d","Type":"ContainerDied","Data":"ed544160e8eb115758fc604af26aa50b1cdac19bb1716dbf08da6756ffd5ab98"} Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.398103 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed544160e8eb115758fc604af26aa50b1cdac19bb1716dbf08da6756ffd5ab98" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.398247 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-58ef-account-create-update-hg8wd" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.403424 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-41b5-account-create-update-wk7dn" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.403475 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-41b5-account-create-update-wk7dn" event={"ID":"ce7d0db6-cb2a-46cb-957b-2ec9db253878","Type":"ContainerDied","Data":"65439f00effaab2d83364989f0bb696bbba0b659b1896b17f34f61ae9aee95d2"} Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.403550 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65439f00effaab2d83364989f0bb696bbba0b659b1896b17f34f61ae9aee95d2" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.410262 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pcrf5" event={"ID":"8030272c-dd7c-4eb4-822f-29fdff143d62","Type":"ContainerDied","Data":"fa04ae7ab051accaec523cb96aca30c1205ad882c92994b8ef12f58d37ac5a82"} Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.410312 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa04ae7ab051accaec523cb96aca30c1205ad882c92994b8ef12f58d37ac5a82" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.410310 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pcrf5" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.411886 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wf7kv" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.411890 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wf7kv" event={"ID":"7608666e-e3a7-4b17-ac4e-9fcacb09ccca","Type":"ContainerDied","Data":"dab3bc7e3ae310af3417d2ec28ef51a521c15b8d661adce5e4b2cb12521932be"} Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.412198 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dab3bc7e3ae310af3417d2ec28ef51a521c15b8d661adce5e4b2cb12521932be" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.413247 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c9d84" event={"ID":"1d6bb0d3-37af-4da6-a806-b276b642fabe","Type":"ContainerDied","Data":"6fab0e9c8b99b5d116f6040f410fff820fbbeaefc732ef77b2a0df2140eecf2c"} Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.413286 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fab0e9c8b99b5d116f6040f410fff820fbbeaefc732ef77b2a0df2140eecf2c" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.413317 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-c9d84" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.463028 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hxhcv"] Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.466265 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hxhcv"] Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541218 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pfwbk"] Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541541 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerName="dnsmasq-dns" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541629 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerName="dnsmasq-dns" Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541639 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7d0db6-cb2a-46cb-957b-2ec9db253878" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541647 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7d0db6-cb2a-46cb-957b-2ec9db253878" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541667 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb24e5f0-b43d-4512-ab23-b340b8b97c1d" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541674 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb24e5f0-b43d-4512-ab23-b340b8b97c1d" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541686 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8030272c-dd7c-4eb4-822f-29fdff143d62" 
containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541691 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8030272c-dd7c-4eb4-822f-29fdff143d62" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541704 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerName="init" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541710 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerName="init" Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541719 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7608666e-e3a7-4b17-ac4e-9fcacb09ccca" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541725 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7608666e-e3a7-4b17-ac4e-9fcacb09ccca" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541734 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743899ef-fe87-4dfb-9286-d9a68ade43c6" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541740 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="743899ef-fe87-4dfb-9286-d9a68ade43c6" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: E0107 03:49:54.541750 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6bb0d3-37af-4da6-a806-b276b642fabe" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541755 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6bb0d3-37af-4da6-a806-b276b642fabe" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541937 4980 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d6bb0d3-37af-4da6-a806-b276b642fabe" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541953 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb24e5f0-b43d-4512-ab23-b340b8b97c1d" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541969 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="743899ef-fe87-4dfb-9286-d9a68ade43c6" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541983 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7608666e-e3a7-4b17-ac4e-9fcacb09ccca" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.541996 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="0534e35c-7b58-476b-ba1d-a8b6d91cbcb9" containerName="dnsmasq-dns" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.542005 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7d0db6-cb2a-46cb-957b-2ec9db253878" containerName="mariadb-account-create-update" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.542014 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8030272c-dd7c-4eb4-822f-29fdff143d62" containerName="mariadb-database-create" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.542512 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.543986 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.550245 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pfwbk"] Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.645092 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f38553ab-2b7d-486b-9290-161ef0dd23b3-operator-scripts\") pod \"root-account-create-update-pfwbk\" (UID: \"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.645221 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2tph\" (UniqueName: \"kubernetes.io/projected/f38553ab-2b7d-486b-9290-161ef0dd23b3-kube-api-access-v2tph\") pod \"root-account-create-update-pfwbk\" (UID: \"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.747035 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f38553ab-2b7d-486b-9290-161ef0dd23b3-operator-scripts\") pod \"root-account-create-update-pfwbk\" (UID: \"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.747165 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2tph\" (UniqueName: \"kubernetes.io/projected/f38553ab-2b7d-486b-9290-161ef0dd23b3-kube-api-access-v2tph\") pod \"root-account-create-update-pfwbk\" (UID: 
\"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.747811 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f38553ab-2b7d-486b-9290-161ef0dd23b3-operator-scripts\") pod \"root-account-create-update-pfwbk\" (UID: \"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.776416 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2tph\" (UniqueName: \"kubernetes.io/projected/f38553ab-2b7d-486b-9290-161ef0dd23b3-kube-api-access-v2tph\") pod \"root-account-create-update-pfwbk\" (UID: \"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:54 crc kubenswrapper[4980]: I0107 03:49:54.914725 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:55 crc kubenswrapper[4980]: I0107 03:49:55.424596 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b5687f55-2760-4b17-949f-7a691768ba40","Type":"ContainerStarted","Data":"eb05b2aeaab03360273c9aff952aab74bdae4f95b7a53933c206fbfad452ee2f"} Jan 07 03:49:55 crc kubenswrapper[4980]: I0107 03:49:55.440034 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pfwbk"] Jan 07 03:49:55 crc kubenswrapper[4980]: I0107 03:49:55.566084 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0" Jan 07 03:49:55 crc kubenswrapper[4980]: E0107 03:49:55.566233 4980 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 07 03:49:55 crc kubenswrapper[4980]: E0107 03:49:55.566590 4980 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 07 03:49:55 crc kubenswrapper[4980]: E0107 03:49:55.566680 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift podName:6b5878bb-8928-4957-a27d-ce18da212460 nodeName:}" failed. No retries permitted until 2026-01-07 03:50:11.566659521 +0000 UTC m=+1058.132354256 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift") pod "swift-storage-0" (UID: "6b5878bb-8928-4957-a27d-ce18da212460") : configmap "swift-ring-files" not found Jan 07 03:49:55 crc kubenswrapper[4980]: I0107 03:49:55.747860 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df09f52-8104-48c9-940a-9d0379637acc" path="/var/lib/kubelet/pods/5df09f52-8104-48c9-940a-9d0379637acc/volumes" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.432365 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfwbk" event={"ID":"f38553ab-2b7d-486b-9290-161ef0dd23b3","Type":"ContainerStarted","Data":"f688f3372fa4d8c9bff711a308dc77267795e3401909f2c964e1991f0fabd752"} Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.432413 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfwbk" event={"ID":"f38553ab-2b7d-486b-9290-161ef0dd23b3","Type":"ContainerStarted","Data":"123877c84bc86af958faf56a776450c80535cf8b83cc6791d4393212405d6ce0"} Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.432487 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.462791 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pfwbk" podStartSLOduration=2.462765015 podStartE2EDuration="2.462765015s" podCreationTimestamp="2026-01-07 03:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:49:56.461906168 +0000 UTC m=+1043.027600943" watchObservedRunningTime="2026-01-07 03:49:56.462765015 +0000 UTC m=+1043.028459810" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.505303 4980 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovn-northd-0" podStartSLOduration=9.212441693 podStartE2EDuration="10.505282401s" podCreationTimestamp="2026-01-07 03:49:46 +0000 UTC" firstStartedPulling="2026-01-07 03:49:51.417951394 +0000 UTC m=+1037.983646139" lastFinishedPulling="2026-01-07 03:49:52.710792112 +0000 UTC m=+1039.276486847" observedRunningTime="2026-01-07 03:49:56.49110578 +0000 UTC m=+1043.056800525" watchObservedRunningTime="2026-01-07 03:49:56.505282401 +0000 UTC m=+1043.070977146" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.757354 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-d9v7c"] Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.758743 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.762926 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d7gzb" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.773761 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.780378 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d9v7c"] Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.888431 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzll2\" (UniqueName: \"kubernetes.io/projected/a87a557a-be2d-477a-afe4-315cd7b49f9a-kube-api-access-vzll2\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.888500 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-db-sync-config-data\") pod 
\"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.888531 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-config-data\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.889980 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-combined-ca-bundle\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.992478 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-combined-ca-bundle\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.992685 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzll2\" (UniqueName: \"kubernetes.io/projected/a87a557a-be2d-477a-afe4-315cd7b49f9a-kube-api-access-vzll2\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.992742 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-db-sync-config-data\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " 
pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:56 crc kubenswrapper[4980]: I0107 03:49:56.992801 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-config-data\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.004506 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-config-data\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.006524 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-combined-ca-bundle\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.007095 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-db-sync-config-data\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.019719 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzll2\" (UniqueName: \"kubernetes.io/projected/a87a557a-be2d-477a-afe4-315cd7b49f9a-kube-api-access-vzll2\") pod \"glance-db-sync-d9v7c\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.107166 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-d9v7c" Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.449067 4980 generic.go:334] "Generic (PLEG): container finished" podID="f38553ab-2b7d-486b-9290-161ef0dd23b3" containerID="f688f3372fa4d8c9bff711a308dc77267795e3401909f2c964e1991f0fabd752" exitCode=0 Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.449914 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfwbk" event={"ID":"f38553ab-2b7d-486b-9290-161ef0dd23b3","Type":"ContainerDied","Data":"f688f3372fa4d8c9bff711a308dc77267795e3401909f2c964e1991f0fabd752"} Jan 07 03:49:57 crc kubenswrapper[4980]: I0107 03:49:57.713137 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d9v7c"] Jan 07 03:49:57 crc kubenswrapper[4980]: W0107 03:49:57.722764 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda87a557a_be2d_477a_afe4_315cd7b49f9a.slice/crio-e141726abebbe6d4413caa1572135dc865f8f2c41b8d7d59274db3d01fccaa05 WatchSource:0}: Error finding container e141726abebbe6d4413caa1572135dc865f8f2c41b8d7d59274db3d01fccaa05: Status 404 returned error can't find the container with id e141726abebbe6d4413caa1572135dc865f8f2c41b8d7d59274db3d01fccaa05 Jan 07 03:49:58 crc kubenswrapper[4980]: I0107 03:49:58.457461 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d9v7c" event={"ID":"a87a557a-be2d-477a-afe4-315cd7b49f9a","Type":"ContainerStarted","Data":"e141726abebbe6d4413caa1572135dc865f8f2c41b8d7d59274db3d01fccaa05"} Jan 07 03:49:58 crc kubenswrapper[4980]: I0107 03:49:58.767910 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pfwbk" Jan 07 03:49:58 crc kubenswrapper[4980]: I0107 03:49:58.935955 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2tph\" (UniqueName: \"kubernetes.io/projected/f38553ab-2b7d-486b-9290-161ef0dd23b3-kube-api-access-v2tph\") pod \"f38553ab-2b7d-486b-9290-161ef0dd23b3\" (UID: \"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " Jan 07 03:49:58 crc kubenswrapper[4980]: I0107 03:49:58.936416 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f38553ab-2b7d-486b-9290-161ef0dd23b3-operator-scripts\") pod \"f38553ab-2b7d-486b-9290-161ef0dd23b3\" (UID: \"f38553ab-2b7d-486b-9290-161ef0dd23b3\") " Jan 07 03:49:58 crc kubenswrapper[4980]: I0107 03:49:58.937787 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38553ab-2b7d-486b-9290-161ef0dd23b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f38553ab-2b7d-486b-9290-161ef0dd23b3" (UID: "f38553ab-2b7d-486b-9290-161ef0dd23b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:49:58 crc kubenswrapper[4980]: I0107 03:49:58.945079 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38553ab-2b7d-486b-9290-161ef0dd23b3-kube-api-access-v2tph" (OuterVolumeSpecName: "kube-api-access-v2tph") pod "f38553ab-2b7d-486b-9290-161ef0dd23b3" (UID: "f38553ab-2b7d-486b-9290-161ef0dd23b3"). InnerVolumeSpecName "kube-api-access-v2tph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:49:59 crc kubenswrapper[4980]: I0107 03:49:59.038916 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f38553ab-2b7d-486b-9290-161ef0dd23b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:59 crc kubenswrapper[4980]: I0107 03:49:59.038953 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2tph\" (UniqueName: \"kubernetes.io/projected/f38553ab-2b7d-486b-9290-161ef0dd23b3-kube-api-access-v2tph\") on node \"crc\" DevicePath \"\"" Jan 07 03:49:59 crc kubenswrapper[4980]: I0107 03:49:59.469266 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfwbk" event={"ID":"f38553ab-2b7d-486b-9290-161ef0dd23b3","Type":"ContainerDied","Data":"123877c84bc86af958faf56a776450c80535cf8b83cc6791d4393212405d6ce0"} Jan 07 03:49:59 crc kubenswrapper[4980]: I0107 03:49:59.469312 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="123877c84bc86af958faf56a776450c80535cf8b83cc6791d4393212405d6ce0" Jan 07 03:49:59 crc kubenswrapper[4980]: I0107 03:49:59.469364 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pfwbk"
Jan 07 03:50:01 crc kubenswrapper[4980]: I0107 03:50:01.494784 4980 generic.go:334] "Generic (PLEG): container finished" podID="a3566d37-de40-4834-9bbc-48dc6fe7e9c5" containerID="3e30a9bc16f5ef2b2b2c6ec61b08377131af2a3cd39b19da2f41abf6d5e929f9" exitCode=0
Jan 07 03:50:01 crc kubenswrapper[4980]: I0107 03:50:01.494918 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-89vfr" event={"ID":"a3566d37-de40-4834-9bbc-48dc6fe7e9c5","Type":"ContainerDied","Data":"3e30a9bc16f5ef2b2b2c6ec61b08377131af2a3cd39b19da2f41abf6d5e929f9"}
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.185450 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-94rwj" podUID="4567269f-c5aa-44a8-8e68-c0dc01c2b55c" containerName="ovn-controller" probeResult="failure" output=<
Jan 07 03:50:02 crc kubenswrapper[4980]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 07 03:50:02 crc kubenswrapper[4980]: >
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.223674 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4nfg5"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.227544 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4nfg5"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.461303 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-94rwj-config-tj7rw"]
Jan 07 03:50:02 crc kubenswrapper[4980]: E0107 03:50:02.461894 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38553ab-2b7d-486b-9290-161ef0dd23b3" containerName="mariadb-account-create-update"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.461907 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38553ab-2b7d-486b-9290-161ef0dd23b3" containerName="mariadb-account-create-update"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.462069 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f38553ab-2b7d-486b-9290-161ef0dd23b3" containerName="mariadb-account-create-update"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.462536 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.465406 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.478698 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94rwj-config-tj7rw"]
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.511364 4980 generic.go:334] "Generic (PLEG): container finished" podID="26440bb2-233e-47e3-bb46-9122523bce68" containerID="16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6" exitCode=0
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.511486 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26440bb2-233e-47e3-bb46-9122523bce68","Type":"ContainerDied","Data":"16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6"}
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.518311 4980 generic.go:334] "Generic (PLEG): container finished" podID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerID="ecbe432cc3de5ef2366a992d17d5ea48d3dc8f104563979f64f98a2a80743205" exitCode=0
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.518510 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6714f510-9927-47da-bc8b-3e4a3995cdc6","Type":"ContainerDied","Data":"ecbe432cc3de5ef2366a992d17d5ea48d3dc8f104563979f64f98a2a80743205"}
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.604532 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.604660 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-scripts\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.604742 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-log-ovn\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.604781 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run-ovn\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.604805 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkqt\" (UniqueName: \"kubernetes.io/projected/56e65aaf-dbf4-4764-aa49-93ddf1959607-kube-api-access-crkqt\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.604890 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-additional-scripts\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.705844 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-log-ovn\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.706111 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run-ovn\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.706129 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crkqt\" (UniqueName: \"kubernetes.io/projected/56e65aaf-dbf4-4764-aa49-93ddf1959607-kube-api-access-crkqt\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.706231 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-additional-scripts\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.706249 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-log-ovn\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.706325 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.706419 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-scripts\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.707194 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run-ovn\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.707984 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.709128 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-additional-scripts\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.710279 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-scripts\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.732794 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crkqt\" (UniqueName: \"kubernetes.io/projected/56e65aaf-dbf4-4764-aa49-93ddf1959607-kube-api-access-crkqt\") pod \"ovn-controller-94rwj-config-tj7rw\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") " pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.787779 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:02 crc kubenswrapper[4980]: I0107 03:50:02.842468 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-89vfr"
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.015045 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxlm6\" (UniqueName: \"kubernetes.io/projected/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-kube-api-access-dxlm6\") pod \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") "
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.015435 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-ring-data-devices\") pod \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") "
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.015507 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-dispersionconf\") pod \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") "
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.015625 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-combined-ca-bundle\") pod \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") "
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.015709 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-scripts\") pod \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") "
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.015781 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-swiftconf\") pod \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") "
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.015885 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-etc-swift\") pod \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\" (UID: \"a3566d37-de40-4834-9bbc-48dc6fe7e9c5\") "
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.016120 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a3566d37-de40-4834-9bbc-48dc6fe7e9c5" (UID: "a3566d37-de40-4834-9bbc-48dc6fe7e9c5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.016536 4980 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.017207 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a3566d37-de40-4834-9bbc-48dc6fe7e9c5" (UID: "a3566d37-de40-4834-9bbc-48dc6fe7e9c5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.024228 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-kube-api-access-dxlm6" (OuterVolumeSpecName: "kube-api-access-dxlm6") pod "a3566d37-de40-4834-9bbc-48dc6fe7e9c5" (UID: "a3566d37-de40-4834-9bbc-48dc6fe7e9c5"). InnerVolumeSpecName "kube-api-access-dxlm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.027805 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a3566d37-de40-4834-9bbc-48dc6fe7e9c5" (UID: "a3566d37-de40-4834-9bbc-48dc6fe7e9c5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.047423 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3566d37-de40-4834-9bbc-48dc6fe7e9c5" (UID: "a3566d37-de40-4834-9bbc-48dc6fe7e9c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.053410 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-scripts" (OuterVolumeSpecName: "scripts") pod "a3566d37-de40-4834-9bbc-48dc6fe7e9c5" (UID: "a3566d37-de40-4834-9bbc-48dc6fe7e9c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.059308 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a3566d37-de40-4834-9bbc-48dc6fe7e9c5" (UID: "a3566d37-de40-4834-9bbc-48dc6fe7e9c5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.079186 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94rwj-config-tj7rw"]
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.119275 4980 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.119307 4980 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.119317 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxlm6\" (UniqueName: \"kubernetes.io/projected/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-kube-api-access-dxlm6\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.119328 4980 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.119392 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.119404 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3566d37-de40-4834-9bbc-48dc6fe7e9c5-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.531216 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94rwj-config-tj7rw" event={"ID":"56e65aaf-dbf4-4764-aa49-93ddf1959607","Type":"ContainerStarted","Data":"e4e730d9f176a8a9e306c231483146f60cc1a8d163953cdc4fb8c3733f3cbde5"}
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.531527 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94rwj-config-tj7rw" event={"ID":"56e65aaf-dbf4-4764-aa49-93ddf1959607","Type":"ContainerStarted","Data":"bc0b11a0796de39a933fbb4bd3d72fb3e5bfe6e5f49d6fc4545245c1863c4840"}
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.538287 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26440bb2-233e-47e3-bb46-9122523bce68","Type":"ContainerStarted","Data":"882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b"}
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.538506 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.540601 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6714f510-9927-47da-bc8b-3e4a3995cdc6","Type":"ContainerStarted","Data":"5e36cba9c6cdfc2b8d1c15aa812ced9691350acef7022fe9c126099890c5a3a4"}
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.540774 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.557921 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-89vfr" event={"ID":"a3566d37-de40-4834-9bbc-48dc6fe7e9c5","Type":"ContainerDied","Data":"a269131b3d3f2659718db2627ac5065e4c7024005926fc03114dcde5029a7b49"}
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.557961 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a269131b3d3f2659718db2627ac5065e4c7024005926fc03114dcde5029a7b49"
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.558018 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-89vfr"
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.566461 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-94rwj-config-tj7rw" podStartSLOduration=1.564998629 podStartE2EDuration="1.564998629s" podCreationTimestamp="2026-01-07 03:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:03.555412209 +0000 UTC m=+1050.121106944" watchObservedRunningTime="2026-01-07 03:50:03.564998629 +0000 UTC m=+1050.130693364"
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.587673 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=57.407081278 podStartE2EDuration="1m12.587658256s" podCreationTimestamp="2026-01-07 03:48:51 +0000 UTC" firstStartedPulling="2026-01-07 03:49:08.326282048 +0000 UTC m=+994.891976793" lastFinishedPulling="2026-01-07 03:49:23.506859026 +0000 UTC m=+1010.072553771" observedRunningTime="2026-01-07 03:50:03.585439126 +0000 UTC m=+1050.151133871" watchObservedRunningTime="2026-01-07 03:50:03.587658256 +0000 UTC m=+1050.153352991"
Jan 07 03:50:03 crc kubenswrapper[4980]: I0107 03:50:03.618219 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=59.171758102 podStartE2EDuration="1m12.618200648s" podCreationTimestamp="2026-01-07 03:48:51 +0000 UTC" firstStartedPulling="2026-01-07 03:49:10.816100286 +0000 UTC m=+997.381795061" lastFinishedPulling="2026-01-07 03:49:24.262542862 +0000 UTC m=+1010.828237607" observedRunningTime="2026-01-07 03:50:03.611936423 +0000 UTC m=+1050.177631158" watchObservedRunningTime="2026-01-07 03:50:03.618200648 +0000 UTC m=+1050.183895383"
Jan 07 03:50:04 crc kubenswrapper[4980]: I0107 03:50:04.570994 4980 generic.go:334] "Generic (PLEG): container finished" podID="56e65aaf-dbf4-4764-aa49-93ddf1959607" containerID="e4e730d9f176a8a9e306c231483146f60cc1a8d163953cdc4fb8c3733f3cbde5" exitCode=0
Jan 07 03:50:04 crc kubenswrapper[4980]: I0107 03:50:04.572747 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94rwj-config-tj7rw" event={"ID":"56e65aaf-dbf4-4764-aa49-93ddf1959607","Type":"ContainerDied","Data":"e4e730d9f176a8a9e306c231483146f60cc1a8d163953cdc4fb8c3733f3cbde5"}
Jan 07 03:50:06 crc kubenswrapper[4980]: I0107 03:50:06.908087 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 07 03:50:07 crc kubenswrapper[4980]: I0107 03:50:07.188870 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-94rwj"
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.206794 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320131 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-scripts\") pod \"56e65aaf-dbf4-4764-aa49-93ddf1959607\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") "
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320249 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-additional-scripts\") pod \"56e65aaf-dbf4-4764-aa49-93ddf1959607\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") "
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320292 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run\") pod \"56e65aaf-dbf4-4764-aa49-93ddf1959607\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") "
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320345 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run-ovn\") pod \"56e65aaf-dbf4-4764-aa49-93ddf1959607\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") "
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320491 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-log-ovn\") pod \"56e65aaf-dbf4-4764-aa49-93ddf1959607\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") "
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320599 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crkqt\" (UniqueName: \"kubernetes.io/projected/56e65aaf-dbf4-4764-aa49-93ddf1959607-kube-api-access-crkqt\") pod \"56e65aaf-dbf4-4764-aa49-93ddf1959607\" (UID: \"56e65aaf-dbf4-4764-aa49-93ddf1959607\") "
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320739 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "56e65aaf-dbf4-4764-aa49-93ddf1959607" (UID: "56e65aaf-dbf4-4764-aa49-93ddf1959607"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320832 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "56e65aaf-dbf4-4764-aa49-93ddf1959607" (UID: "56e65aaf-dbf4-4764-aa49-93ddf1959607"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.320869 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run" (OuterVolumeSpecName: "var-run") pod "56e65aaf-dbf4-4764-aa49-93ddf1959607" (UID: "56e65aaf-dbf4-4764-aa49-93ddf1959607"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.321757 4980 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.321796 4980 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.321816 4980 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56e65aaf-dbf4-4764-aa49-93ddf1959607-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.322591 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "56e65aaf-dbf4-4764-aa49-93ddf1959607" (UID: "56e65aaf-dbf4-4764-aa49-93ddf1959607"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.322953 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-scripts" (OuterVolumeSpecName: "scripts") pod "56e65aaf-dbf4-4764-aa49-93ddf1959607" (UID: "56e65aaf-dbf4-4764-aa49-93ddf1959607"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.325519 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e65aaf-dbf4-4764-aa49-93ddf1959607-kube-api-access-crkqt" (OuterVolumeSpecName: "kube-api-access-crkqt") pod "56e65aaf-dbf4-4764-aa49-93ddf1959607" (UID: "56e65aaf-dbf4-4764-aa49-93ddf1959607"). InnerVolumeSpecName "kube-api-access-crkqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.423291 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crkqt\" (UniqueName: \"kubernetes.io/projected/56e65aaf-dbf4-4764-aa49-93ddf1959607-kube-api-access-crkqt\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.423337 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.423352 4980 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56e65aaf-dbf4-4764-aa49-93ddf1959607-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.627605 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0"
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.634189 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b5878bb-8928-4957-a27d-ce18da212460-etc-swift\") pod \"swift-storage-0\" (UID: \"6b5878bb-8928-4957-a27d-ce18da212460\") " pod="openstack/swift-storage-0"
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.641202 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94rwj-config-tj7rw" event={"ID":"56e65aaf-dbf4-4764-aa49-93ddf1959607","Type":"ContainerDied","Data":"bc0b11a0796de39a933fbb4bd3d72fb3e5bfe6e5f49d6fc4545245c1863c4840"}
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.641253 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0b11a0796de39a933fbb4bd3d72fb3e5bfe6e5f49d6fc4545245c1863c4840"
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.641280 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94rwj-config-tj7rw"
Jan 07 03:50:11 crc kubenswrapper[4980]: I0107 03:50:11.827259 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 07 03:50:12 crc kubenswrapper[4980]: I0107 03:50:12.326242 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-94rwj-config-tj7rw"]
Jan 07 03:50:12 crc kubenswrapper[4980]: I0107 03:50:12.336543 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-94rwj-config-tj7rw"]
Jan 07 03:50:12 crc kubenswrapper[4980]: I0107 03:50:12.412161 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 07 03:50:12 crc kubenswrapper[4980]: I0107 03:50:12.654613 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"49df144b70a6e6a174023c8a630831faf1bdf8b04f3c57438256711f4c0ee248"}
Jan 07 03:50:12 crc kubenswrapper[4980]: I0107 03:50:12.657292 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d9v7c" event={"ID":"a87a557a-be2d-477a-afe4-315cd7b49f9a","Type":"ContainerStarted","Data":"ecd0933c106a7c591ee289deb4f1a1677096a62a762cf0bb1336196203bf30cf"}
Jan 07 03:50:12 crc kubenswrapper[4980]: I0107 03:50:12.697988 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-d9v7c" podStartSLOduration=3.151998866 podStartE2EDuration="16.69796337s" podCreationTimestamp="2026-01-07 03:49:56 +0000 UTC" firstStartedPulling="2026-01-07 03:49:57.725541218 +0000 UTC m=+1044.291235963" lastFinishedPulling="2026-01-07 03:50:11.271505722 +0000 UTC m=+1057.837200467" observedRunningTime="2026-01-07 03:50:12.67615307 +0000 UTC m=+1059.241847875" watchObservedRunningTime="2026-01-07 03:50:12.69796337 +0000 UTC m=+1059.263658125"
Jan 07 03:50:12 crc kubenswrapper[4980]: I0107 03:50:12.866592 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.130720 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.427878 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zd4fd"]
Jan 07 03:50:13 crc kubenswrapper[4980]: E0107 03:50:13.428176 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3566d37-de40-4834-9bbc-48dc6fe7e9c5" containerName="swift-ring-rebalance"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.428188 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3566d37-de40-4834-9bbc-48dc6fe7e9c5" containerName="swift-ring-rebalance"
Jan 07 03:50:13 crc kubenswrapper[4980]: E0107 03:50:13.428202 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e65aaf-dbf4-4764-aa49-93ddf1959607" containerName="ovn-config"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.428208 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e65aaf-dbf4-4764-aa49-93ddf1959607" containerName="ovn-config"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.428340 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e65aaf-dbf4-4764-aa49-93ddf1959607" containerName="ovn-config"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.428357 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3566d37-de40-4834-9bbc-48dc6fe7e9c5" containerName="swift-ring-rebalance"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.428816 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zd4fd"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.441918 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-32f7-account-create-update-5k5mk"]
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.442758 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-32f7-account-create-update-5k5mk"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.445785 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.462757 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnw7\" (UniqueName: \"kubernetes.io/projected/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-kube-api-access-bgnw7\") pod \"cinder-db-create-zd4fd\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " pod="openstack/cinder-db-create-zd4fd"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.462806 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-operator-scripts\") pod \"cinder-db-create-zd4fd\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " pod="openstack/cinder-db-create-zd4fd"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.475305 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-32f7-account-create-update-5k5mk"]
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.540527 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-m58bl"]
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.545535 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m58bl"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.564208 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zd4fd"]
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.565675 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhxzs\" (UniqueName: \"kubernetes.io/projected/8f4907f9-9701-4aea-8ced-7a3f130bea66-kube-api-access-nhxzs\") pod \"barbican-32f7-account-create-update-5k5mk\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " pod="openstack/barbican-32f7-account-create-update-5k5mk"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.565717 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4907f9-9701-4aea-8ced-7a3f130bea66-operator-scripts\") pod \"barbican-32f7-account-create-update-5k5mk\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " pod="openstack/barbican-32f7-account-create-update-5k5mk"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.565740 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswsj\" (UniqueName: \"kubernetes.io/projected/d58f3e70-86ff-4d8b-8422-45ae65b067d6-kube-api-access-gswsj\") pod \"barbican-db-create-m58bl\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " pod="openstack/barbican-db-create-m58bl"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.565767 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgnw7\" (UniqueName: \"kubernetes.io/projected/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-kube-api-access-bgnw7\") pod \"cinder-db-create-zd4fd\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " pod="openstack/cinder-db-create-zd4fd"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.565798 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-operator-scripts\") pod \"cinder-db-create-zd4fd\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " pod="openstack/cinder-db-create-zd4fd"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.565840 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58f3e70-86ff-4d8b-8422-45ae65b067d6-operator-scripts\") pod \"barbican-db-create-m58bl\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " pod="openstack/barbican-db-create-m58bl"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.566721 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-operator-scripts\") pod \"cinder-db-create-zd4fd\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " pod="openstack/cinder-db-create-zd4fd"
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.590623 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m58bl"]
Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.597270 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgnw7\" (UniqueName:
\"kubernetes.io/projected/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-kube-api-access-bgnw7\") pod \"cinder-db-create-zd4fd\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " pod="openstack/cinder-db-create-zd4fd" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.620148 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0d12-account-create-update-rlkh2"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.621372 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.623834 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.628301 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0d12-account-create-update-rlkh2"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.667834 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjwg\" (UniqueName: \"kubernetes.io/projected/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-kube-api-access-8gjwg\") pod \"cinder-0d12-account-create-update-rlkh2\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.667899 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhxzs\" (UniqueName: \"kubernetes.io/projected/8f4907f9-9701-4aea-8ced-7a3f130bea66-kube-api-access-nhxzs\") pod \"barbican-32f7-account-create-update-5k5mk\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " pod="openstack/barbican-32f7-account-create-update-5k5mk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.667964 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8f4907f9-9701-4aea-8ced-7a3f130bea66-operator-scripts\") pod \"barbican-32f7-account-create-update-5k5mk\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " pod="openstack/barbican-32f7-account-create-update-5k5mk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.667995 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswsj\" (UniqueName: \"kubernetes.io/projected/d58f3e70-86ff-4d8b-8422-45ae65b067d6-kube-api-access-gswsj\") pod \"barbican-db-create-m58bl\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " pod="openstack/barbican-db-create-m58bl" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.668074 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58f3e70-86ff-4d8b-8422-45ae65b067d6-operator-scripts\") pod \"barbican-db-create-m58bl\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " pod="openstack/barbican-db-create-m58bl" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.668147 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-operator-scripts\") pod \"cinder-0d12-account-create-update-rlkh2\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.668686 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4907f9-9701-4aea-8ced-7a3f130bea66-operator-scripts\") pod \"barbican-32f7-account-create-update-5k5mk\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " pod="openstack/barbican-32f7-account-create-update-5k5mk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.669082 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58f3e70-86ff-4d8b-8422-45ae65b067d6-operator-scripts\") pod \"barbican-db-create-m58bl\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " pod="openstack/barbican-db-create-m58bl" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.691081 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhxzs\" (UniqueName: \"kubernetes.io/projected/8f4907f9-9701-4aea-8ced-7a3f130bea66-kube-api-access-nhxzs\") pod \"barbican-32f7-account-create-update-5k5mk\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " pod="openstack/barbican-32f7-account-create-update-5k5mk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.711650 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gswsj\" (UniqueName: \"kubernetes.io/projected/d58f3e70-86ff-4d8b-8422-45ae65b067d6-kube-api-access-gswsj\") pod \"barbican-db-create-m58bl\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " pod="openstack/barbican-db-create-m58bl" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.725278 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hxl82"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.726517 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.730291 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.731089 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fsjw7" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.731252 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.731384 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.753221 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e65aaf-dbf4-4764-aa49-93ddf1959607" path="/var/lib/kubelet/pods/56e65aaf-dbf4-4764-aa49-93ddf1959607/volumes" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.755102 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zd4fd" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.758887 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hxl82"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.765252 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-32f7-account-create-update-5k5mk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.770986 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-config-data\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.771164 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-operator-scripts\") pod \"cinder-0d12-account-create-update-rlkh2\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.771240 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gjwg\" (UniqueName: \"kubernetes.io/projected/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-kube-api-access-8gjwg\") pod \"cinder-0d12-account-create-update-rlkh2\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.771301 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-combined-ca-bundle\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.771447 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x865\" (UniqueName: 
\"kubernetes.io/projected/f098f890-2064-479b-bd73-cf3269c4f3c2-kube-api-access-5x865\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.772831 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-operator-scripts\") pod \"cinder-0d12-account-create-update-rlkh2\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.795956 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gjwg\" (UniqueName: \"kubernetes.io/projected/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-kube-api-access-8gjwg\") pod \"cinder-0d12-account-create-update-rlkh2\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.828868 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mp6vb"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.829904 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.849655 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c79a-account-create-update-btmmk"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.850767 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.853547 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.860605 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mp6vb"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.872731 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-config-data\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.873309 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/540d5527-6e10-4850-b025-b0a8b62a8fc7-operator-scripts\") pod \"neutron-db-create-mp6vb\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.873353 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwsht\" (UniqueName: \"kubernetes.io/projected/540d5527-6e10-4850-b025-b0a8b62a8fc7-kube-api-access-qwsht\") pod \"neutron-db-create-mp6vb\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.873378 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlct\" (UniqueName: \"kubernetes.io/projected/2a49206c-706d-4cbf-b4fb-9cebdb720837-kube-api-access-mqlct\") pod \"neutron-c79a-account-create-update-btmmk\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " 
pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.873421 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a49206c-706d-4cbf-b4fb-9cebdb720837-operator-scripts\") pod \"neutron-c79a-account-create-update-btmmk\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.873444 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-combined-ca-bundle\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.873504 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x865\" (UniqueName: \"kubernetes.io/projected/f098f890-2064-479b-bd73-cf3269c4f3c2-kube-api-access-5x865\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.873784 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c79a-account-create-update-btmmk"] Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.882201 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-config-data\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.885191 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-combined-ca-bundle\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.890229 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m58bl" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.893133 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x865\" (UniqueName: \"kubernetes.io/projected/f098f890-2064-479b-bd73-cf3269c4f3c2-kube-api-access-5x865\") pod \"keystone-db-sync-hxl82\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.946173 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.974973 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/540d5527-6e10-4850-b025-b0a8b62a8fc7-operator-scripts\") pod \"neutron-db-create-mp6vb\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.975019 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwsht\" (UniqueName: \"kubernetes.io/projected/540d5527-6e10-4850-b025-b0a8b62a8fc7-kube-api-access-qwsht\") pod \"neutron-db-create-mp6vb\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.975045 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqlct\" (UniqueName: 
\"kubernetes.io/projected/2a49206c-706d-4cbf-b4fb-9cebdb720837-kube-api-access-mqlct\") pod \"neutron-c79a-account-create-update-btmmk\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.975091 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a49206c-706d-4cbf-b4fb-9cebdb720837-operator-scripts\") pod \"neutron-c79a-account-create-update-btmmk\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.975713 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/540d5527-6e10-4850-b025-b0a8b62a8fc7-operator-scripts\") pod \"neutron-db-create-mp6vb\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.977155 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a49206c-706d-4cbf-b4fb-9cebdb720837-operator-scripts\") pod \"neutron-c79a-account-create-update-btmmk\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.993031 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwsht\" (UniqueName: \"kubernetes.io/projected/540d5527-6e10-4850-b025-b0a8b62a8fc7-kube-api-access-qwsht\") pod \"neutron-db-create-mp6vb\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:13 crc kubenswrapper[4980]: I0107 03:50:13.994699 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqlct\" 
(UniqueName: \"kubernetes.io/projected/2a49206c-706d-4cbf-b4fb-9cebdb720837-kube-api-access-mqlct\") pod \"neutron-c79a-account-create-update-btmmk\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.065438 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.162238 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.174662 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.343442 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zd4fd"] Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.375228 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-32f7-account-create-update-5k5mk"] Jan 07 03:50:14 crc kubenswrapper[4980]: W0107 03:50:14.526884 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a530ce0_0dd8_46af_87ae_d2c63cb588f3.slice/crio-23116446af4517d9f949c0ce093d2b6122003dcb4c31cb1d507198209bf246e0 WatchSource:0}: Error finding container 23116446af4517d9f949c0ce093d2b6122003dcb4c31cb1d507198209bf246e0: Status 404 returned error can't find the container with id 23116446af4517d9f949c0ce093d2b6122003dcb4c31cb1d507198209bf246e0 Jan 07 03:50:14 crc kubenswrapper[4980]: W0107 03:50:14.529187 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4907f9_9701_4aea_8ced_7a3f130bea66.slice/crio-a9c98088d9b3a3ea797f2a9a67207264644db0c035e8aebab1c5db73566e2fbf 
WatchSource:0}: Error finding container a9c98088d9b3a3ea797f2a9a67207264644db0c035e8aebab1c5db73566e2fbf: Status 404 returned error can't find the container with id a9c98088d9b3a3ea797f2a9a67207264644db0c035e8aebab1c5db73566e2fbf Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.682284 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-32f7-account-create-update-5k5mk" event={"ID":"8f4907f9-9701-4aea-8ced-7a3f130bea66","Type":"ContainerStarted","Data":"a9c98088d9b3a3ea797f2a9a67207264644db0c035e8aebab1c5db73566e2fbf"} Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.689408 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd4fd" event={"ID":"7a530ce0-0dd8-46af-87ae-d2c63cb588f3","Type":"ContainerStarted","Data":"23116446af4517d9f949c0ce093d2b6122003dcb4c31cb1d507198209bf246e0"} Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.898630 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hxl82"] Jan 07 03:50:14 crc kubenswrapper[4980]: I0107 03:50:14.961874 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m58bl"] Jan 07 03:50:14 crc kubenswrapper[4980]: W0107 03:50:14.983764 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd58f3e70_86ff_4d8b_8422_45ae65b067d6.slice/crio-4383ebb763201ff5f3aec0fe4be8567d8409ffdbdf237fb371510652af521c2c WatchSource:0}: Error finding container 4383ebb763201ff5f3aec0fe4be8567d8409ffdbdf237fb371510652af521c2c: Status 404 returned error can't find the container with id 4383ebb763201ff5f3aec0fe4be8567d8409ffdbdf237fb371510652af521c2c Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.234598 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0d12-account-create-update-rlkh2"] Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.241645 4980 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/neutron-c79a-account-create-update-btmmk"] Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.277241 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mp6vb"] Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.701529 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hxl82" event={"ID":"f098f890-2064-479b-bd73-cf3269c4f3c2","Type":"ContainerStarted","Data":"9d1db080c2b22db1375720422bc7450973dfcea63207760107e0e2b11caa60e6"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.703533 4980 generic.go:334] "Generic (PLEG): container finished" podID="8f4907f9-9701-4aea-8ced-7a3f130bea66" containerID="830aac1410ed7ed865c739dd074f0ac3fdd9e1fb4c9bbd628e9e8e631eef0301" exitCode=0 Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.703624 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-32f7-account-create-update-5k5mk" event={"ID":"8f4907f9-9701-4aea-8ced-7a3f130bea66","Type":"ContainerDied","Data":"830aac1410ed7ed865c739dd074f0ac3fdd9e1fb4c9bbd628e9e8e631eef0301"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.707204 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d12-account-create-update-rlkh2" event={"ID":"a8aaabcf-42ed-4586-8d72-bb9e14a8d369","Type":"ContainerStarted","Data":"72117884f66dffe7458112b6bfaabb836f07dbc928f19afae06412ca930fac12"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.707252 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d12-account-create-update-rlkh2" event={"ID":"a8aaabcf-42ed-4586-8d72-bb9e14a8d369","Type":"ContainerStarted","Data":"d9c85c82356bcdf91ab678e75e1b587fb7124f5a94da91a6fae6b30f7ef28654"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.711915 4980 generic.go:334] "Generic (PLEG): container finished" podID="d58f3e70-86ff-4d8b-8422-45ae65b067d6" 
containerID="cbdce93e00678741b45332c37336b592b0bfcd17656e3ccaca3e193adfdef1f0" exitCode=0 Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.712002 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m58bl" event={"ID":"d58f3e70-86ff-4d8b-8422-45ae65b067d6","Type":"ContainerDied","Data":"cbdce93e00678741b45332c37336b592b0bfcd17656e3ccaca3e193adfdef1f0"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.712053 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m58bl" event={"ID":"d58f3e70-86ff-4d8b-8422-45ae65b067d6","Type":"ContainerStarted","Data":"4383ebb763201ff5f3aec0fe4be8567d8409ffdbdf237fb371510652af521c2c"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.727546 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c79a-account-create-update-btmmk" event={"ID":"2a49206c-706d-4cbf-b4fb-9cebdb720837","Type":"ContainerStarted","Data":"04c50f021ba577bcc0bad4d94ec431b2f7d587bde330848f7fc2c3cbb911c2ed"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.727615 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c79a-account-create-update-btmmk" event={"ID":"2a49206c-706d-4cbf-b4fb-9cebdb720837","Type":"ContainerStarted","Data":"5f28a9e3ecf0070939b7ac5381ca734f5d6bc53a778f14dd190b73828caa0ee8"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.729974 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mp6vb" event={"ID":"540d5527-6e10-4850-b025-b0a8b62a8fc7","Type":"ContainerStarted","Data":"5d00658dc783f9a775da18be3f0a5e95d9480eb583fa5823ef1d921cc4335877"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.730008 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mp6vb" event={"ID":"540d5527-6e10-4850-b025-b0a8b62a8fc7","Type":"ContainerStarted","Data":"98cadc3eeafeba1057e1ebaf1e6a01c623187f412dbef7943d9df67e8a19023f"} Jan 07 03:50:15 
crc kubenswrapper[4980]: I0107 03:50:15.731685 4980 generic.go:334] "Generic (PLEG): container finished" podID="7a530ce0-0dd8-46af-87ae-d2c63cb588f3" containerID="b4867fd7ff6483d2ff4162e9b34ad8e0052d09b19835b8c65219f8203de8f007" exitCode=0 Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.731728 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd4fd" event={"ID":"7a530ce0-0dd8-46af-87ae-d2c63cb588f3","Type":"ContainerDied","Data":"b4867fd7ff6483d2ff4162e9b34ad8e0052d09b19835b8c65219f8203de8f007"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.733625 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"8d55bba8a3e9019d556c2cfca14b8d3cc55159d3339614e8921225621119309c"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.733648 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"e5eac3557528e2065fc3d34123b23378ef52a49d7d403a824431bf585bf2093f"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.733657 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"b72ac797fe80b6dd74b2835f34eb782433f978da97c57497460d98411e7cba23"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.733664 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"0c79b15f5a1aea928b429f173eda7dfe1cddf1b5f366f310f1bb2ab71cd231cf"} Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.740474 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0d12-account-create-update-rlkh2" podStartSLOduration=2.740444531 
podStartE2EDuration="2.740444531s" podCreationTimestamp="2026-01-07 03:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:15.736088745 +0000 UTC m=+1062.301783480" watchObservedRunningTime="2026-01-07 03:50:15.740444531 +0000 UTC m=+1062.306139266" Jan 07 03:50:15 crc kubenswrapper[4980]: I0107 03:50:15.771835 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c79a-account-create-update-btmmk" podStartSLOduration=2.7718223699999998 podStartE2EDuration="2.77182237s" podCreationTimestamp="2026-01-07 03:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:15.770891521 +0000 UTC m=+1062.336586256" watchObservedRunningTime="2026-01-07 03:50:15.77182237 +0000 UTC m=+1062.337517105" Jan 07 03:50:16 crc kubenswrapper[4980]: I0107 03:50:16.746317 4980 generic.go:334] "Generic (PLEG): container finished" podID="a8aaabcf-42ed-4586-8d72-bb9e14a8d369" containerID="72117884f66dffe7458112b6bfaabb836f07dbc928f19afae06412ca930fac12" exitCode=0 Jan 07 03:50:16 crc kubenswrapper[4980]: I0107 03:50:16.746385 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d12-account-create-update-rlkh2" event={"ID":"a8aaabcf-42ed-4586-8d72-bb9e14a8d369","Type":"ContainerDied","Data":"72117884f66dffe7458112b6bfaabb836f07dbc928f19afae06412ca930fac12"} Jan 07 03:50:16 crc kubenswrapper[4980]: I0107 03:50:16.761694 4980 generic.go:334] "Generic (PLEG): container finished" podID="2a49206c-706d-4cbf-b4fb-9cebdb720837" containerID="04c50f021ba577bcc0bad4d94ec431b2f7d587bde330848f7fc2c3cbb911c2ed" exitCode=0 Jan 07 03:50:16 crc kubenswrapper[4980]: I0107 03:50:16.762101 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c79a-account-create-update-btmmk" 
event={"ID":"2a49206c-706d-4cbf-b4fb-9cebdb720837","Type":"ContainerDied","Data":"04c50f021ba577bcc0bad4d94ec431b2f7d587bde330848f7fc2c3cbb911c2ed"} Jan 07 03:50:16 crc kubenswrapper[4980]: I0107 03:50:16.763237 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mp6vb" podStartSLOduration=3.763220446 podStartE2EDuration="3.763220446s" podCreationTimestamp="2026-01-07 03:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:15.803980383 +0000 UTC m=+1062.369675118" watchObservedRunningTime="2026-01-07 03:50:16.763220446 +0000 UTC m=+1063.328915171" Jan 07 03:50:16 crc kubenswrapper[4980]: I0107 03:50:16.766370 4980 generic.go:334] "Generic (PLEG): container finished" podID="540d5527-6e10-4850-b025-b0a8b62a8fc7" containerID="5d00658dc783f9a775da18be3f0a5e95d9480eb583fa5823ef1d921cc4335877" exitCode=0 Jan 07 03:50:16 crc kubenswrapper[4980]: I0107 03:50:16.766504 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mp6vb" event={"ID":"540d5527-6e10-4850-b025-b0a8b62a8fc7","Type":"ContainerDied","Data":"5d00658dc783f9a775da18be3f0a5e95d9480eb583fa5823ef1d921cc4335877"} Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.212446 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zd4fd" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.224630 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-32f7-account-create-update-5k5mk" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.231988 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgnw7\" (UniqueName: \"kubernetes.io/projected/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-kube-api-access-bgnw7\") pod \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.232581 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-operator-scripts\") pod \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\" (UID: \"7a530ce0-0dd8-46af-87ae-d2c63cb588f3\") " Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.232626 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4907f9-9701-4aea-8ced-7a3f130bea66-operator-scripts\") pod \"8f4907f9-9701-4aea-8ced-7a3f130bea66\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.232686 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhxzs\" (UniqueName: \"kubernetes.io/projected/8f4907f9-9701-4aea-8ced-7a3f130bea66-kube-api-access-nhxzs\") pod \"8f4907f9-9701-4aea-8ced-7a3f130bea66\" (UID: \"8f4907f9-9701-4aea-8ced-7a3f130bea66\") " Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.259880 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a530ce0-0dd8-46af-87ae-d2c63cb588f3" (UID: "7a530ce0-0dd8-46af-87ae-d2c63cb588f3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.261480 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4907f9-9701-4aea-8ced-7a3f130bea66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f4907f9-9701-4aea-8ced-7a3f130bea66" (UID: "8f4907f9-9701-4aea-8ced-7a3f130bea66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.264062 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4907f9-9701-4aea-8ced-7a3f130bea66-kube-api-access-nhxzs" (OuterVolumeSpecName: "kube-api-access-nhxzs") pod "8f4907f9-9701-4aea-8ced-7a3f130bea66" (UID: "8f4907f9-9701-4aea-8ced-7a3f130bea66"). InnerVolumeSpecName "kube-api-access-nhxzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.265713 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-kube-api-access-bgnw7" (OuterVolumeSpecName: "kube-api-access-bgnw7") pod "7a530ce0-0dd8-46af-87ae-d2c63cb588f3" (UID: "7a530ce0-0dd8-46af-87ae-d2c63cb588f3"). InnerVolumeSpecName "kube-api-access-bgnw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.321773 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-m58bl" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.336077 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58f3e70-86ff-4d8b-8422-45ae65b067d6-operator-scripts\") pod \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.336197 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gswsj\" (UniqueName: \"kubernetes.io/projected/d58f3e70-86ff-4d8b-8422-45ae65b067d6-kube-api-access-gswsj\") pod \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\" (UID: \"d58f3e70-86ff-4d8b-8422-45ae65b067d6\") " Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.337000 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.337032 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4907f9-9701-4aea-8ced-7a3f130bea66-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.337051 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhxzs\" (UniqueName: \"kubernetes.io/projected/8f4907f9-9701-4aea-8ced-7a3f130bea66-kube-api-access-nhxzs\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.337069 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgnw7\" (UniqueName: \"kubernetes.io/projected/7a530ce0-0dd8-46af-87ae-d2c63cb588f3-kube-api-access-bgnw7\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.337145 4980 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d58f3e70-86ff-4d8b-8422-45ae65b067d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d58f3e70-86ff-4d8b-8422-45ae65b067d6" (UID: "d58f3e70-86ff-4d8b-8422-45ae65b067d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.347239 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d58f3e70-86ff-4d8b-8422-45ae65b067d6-kube-api-access-gswsj" (OuterVolumeSpecName: "kube-api-access-gswsj") pod "d58f3e70-86ff-4d8b-8422-45ae65b067d6" (UID: "d58f3e70-86ff-4d8b-8422-45ae65b067d6"). InnerVolumeSpecName "kube-api-access-gswsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.438257 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58f3e70-86ff-4d8b-8422-45ae65b067d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.438289 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gswsj\" (UniqueName: \"kubernetes.io/projected/d58f3e70-86ff-4d8b-8422-45ae65b067d6-kube-api-access-gswsj\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.787309 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m58bl" event={"ID":"d58f3e70-86ff-4d8b-8422-45ae65b067d6","Type":"ContainerDied","Data":"4383ebb763201ff5f3aec0fe4be8567d8409ffdbdf237fb371510652af521c2c"} Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.788336 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4383ebb763201ff5f3aec0fe4be8567d8409ffdbdf237fb371510652af521c2c" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.787346 4980 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m58bl" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.791757 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd4fd" event={"ID":"7a530ce0-0dd8-46af-87ae-d2c63cb588f3","Type":"ContainerDied","Data":"23116446af4517d9f949c0ce093d2b6122003dcb4c31cb1d507198209bf246e0"} Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.791800 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23116446af4517d9f949c0ce093d2b6122003dcb4c31cb1d507198209bf246e0" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.791874 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zd4fd" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.796875 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"2911f0e6ed0ff63a6c22469bcfca531ee5e2a57010d1ffa9945cf14c91d113f5"} Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.798369 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-32f7-account-create-update-5k5mk" Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.798971 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-32f7-account-create-update-5k5mk" event={"ID":"8f4907f9-9701-4aea-8ced-7a3f130bea66","Type":"ContainerDied","Data":"a9c98088d9b3a3ea797f2a9a67207264644db0c035e8aebab1c5db73566e2fbf"} Jan 07 03:50:17 crc kubenswrapper[4980]: I0107 03:50:17.798989 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c98088d9b3a3ea797f2a9a67207264644db0c035e8aebab1c5db73566e2fbf" Jan 07 03:50:19 crc kubenswrapper[4980]: I0107 03:50:19.816124 4980 generic.go:334] "Generic (PLEG): container finished" podID="a87a557a-be2d-477a-afe4-315cd7b49f9a" containerID="ecd0933c106a7c591ee289deb4f1a1677096a62a762cf0bb1336196203bf30cf" exitCode=0 Jan 07 03:50:19 crc kubenswrapper[4980]: I0107 03:50:19.816204 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d9v7c" event={"ID":"a87a557a-be2d-477a-afe4-315cd7b49f9a","Type":"ContainerDied","Data":"ecd0933c106a7c591ee289deb4f1a1677096a62a762cf0bb1336196203bf30cf"} Jan 07 03:50:19 crc kubenswrapper[4980]: I0107 03:50:19.976065 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:19 crc kubenswrapper[4980]: I0107 03:50:19.980169 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-operator-scripts\") pod \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " Jan 07 03:50:19 crc kubenswrapper[4980]: I0107 03:50:19.981625 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8aaabcf-42ed-4586-8d72-bb9e14a8d369" (UID: "a8aaabcf-42ed-4586-8d72-bb9e14a8d369"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:19 crc kubenswrapper[4980]: I0107 03:50:19.990773 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.039330 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.084474 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gjwg\" (UniqueName: \"kubernetes.io/projected/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-kube-api-access-8gjwg\") pod \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\" (UID: \"a8aaabcf-42ed-4586-8d72-bb9e14a8d369\") " Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.085078 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.091537 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-kube-api-access-8gjwg" (OuterVolumeSpecName: "kube-api-access-8gjwg") pod "a8aaabcf-42ed-4586-8d72-bb9e14a8d369" (UID: "a8aaabcf-42ed-4586-8d72-bb9e14a8d369"). InnerVolumeSpecName "kube-api-access-8gjwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.186296 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/540d5527-6e10-4850-b025-b0a8b62a8fc7-operator-scripts\") pod \"540d5527-6e10-4850-b025-b0a8b62a8fc7\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.186435 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwsht\" (UniqueName: \"kubernetes.io/projected/540d5527-6e10-4850-b025-b0a8b62a8fc7-kube-api-access-qwsht\") pod \"540d5527-6e10-4850-b025-b0a8b62a8fc7\" (UID: \"540d5527-6e10-4850-b025-b0a8b62a8fc7\") " Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.186480 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqlct\" (UniqueName: \"kubernetes.io/projected/2a49206c-706d-4cbf-b4fb-9cebdb720837-kube-api-access-mqlct\") pod \"2a49206c-706d-4cbf-b4fb-9cebdb720837\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.186514 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a49206c-706d-4cbf-b4fb-9cebdb720837-operator-scripts\") pod \"2a49206c-706d-4cbf-b4fb-9cebdb720837\" (UID: \"2a49206c-706d-4cbf-b4fb-9cebdb720837\") " Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.186994 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a49206c-706d-4cbf-b4fb-9cebdb720837-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a49206c-706d-4cbf-b4fb-9cebdb720837" (UID: "2a49206c-706d-4cbf-b4fb-9cebdb720837"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.187150 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/540d5527-6e10-4850-b025-b0a8b62a8fc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "540d5527-6e10-4850-b025-b0a8b62a8fc7" (UID: "540d5527-6e10-4850-b025-b0a8b62a8fc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.187701 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gjwg\" (UniqueName: \"kubernetes.io/projected/a8aaabcf-42ed-4586-8d72-bb9e14a8d369-kube-api-access-8gjwg\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.187734 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/540d5527-6e10-4850-b025-b0a8b62a8fc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.187747 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a49206c-706d-4cbf-b4fb-9cebdb720837-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.190650 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540d5527-6e10-4850-b025-b0a8b62a8fc7-kube-api-access-qwsht" (OuterVolumeSpecName: "kube-api-access-qwsht") pod "540d5527-6e10-4850-b025-b0a8b62a8fc7" (UID: "540d5527-6e10-4850-b025-b0a8b62a8fc7"). InnerVolumeSpecName "kube-api-access-qwsht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.192017 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a49206c-706d-4cbf-b4fb-9cebdb720837-kube-api-access-mqlct" (OuterVolumeSpecName: "kube-api-access-mqlct") pod "2a49206c-706d-4cbf-b4fb-9cebdb720837" (UID: "2a49206c-706d-4cbf-b4fb-9cebdb720837"). InnerVolumeSpecName "kube-api-access-mqlct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.290721 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwsht\" (UniqueName: \"kubernetes.io/projected/540d5527-6e10-4850-b025-b0a8b62a8fc7-kube-api-access-qwsht\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.290785 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqlct\" (UniqueName: \"kubernetes.io/projected/2a49206c-706d-4cbf-b4fb-9cebdb720837-kube-api-access-mqlct\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.857487 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mp6vb" event={"ID":"540d5527-6e10-4850-b025-b0a8b62a8fc7","Type":"ContainerDied","Data":"98cadc3eeafeba1057e1ebaf1e6a01c623187f412dbef7943d9df67e8a19023f"} Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.857853 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mp6vb" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.857575 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98cadc3eeafeba1057e1ebaf1e6a01c623187f412dbef7943d9df67e8a19023f" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.863852 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d12-account-create-update-rlkh2" event={"ID":"a8aaabcf-42ed-4586-8d72-bb9e14a8d369","Type":"ContainerDied","Data":"d9c85c82356bcdf91ab678e75e1b587fb7124f5a94da91a6fae6b30f7ef28654"} Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.863899 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9c85c82356bcdf91ab678e75e1b587fb7124f5a94da91a6fae6b30f7ef28654" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.864062 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d12-account-create-update-rlkh2" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.866447 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c79a-account-create-update-btmmk" Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.866677 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c79a-account-create-update-btmmk" event={"ID":"2a49206c-706d-4cbf-b4fb-9cebdb720837","Type":"ContainerDied","Data":"5f28a9e3ecf0070939b7ac5381ca734f5d6bc53a778f14dd190b73828caa0ee8"} Jan 07 03:50:20 crc kubenswrapper[4980]: I0107 03:50:20.866889 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f28a9e3ecf0070939b7ac5381ca734f5d6bc53a778f14dd190b73828caa0ee8" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.283504 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-d9v7c" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.313051 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-config-data\") pod \"a87a557a-be2d-477a-afe4-315cd7b49f9a\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.313926 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzll2\" (UniqueName: \"kubernetes.io/projected/a87a557a-be2d-477a-afe4-315cd7b49f9a-kube-api-access-vzll2\") pod \"a87a557a-be2d-477a-afe4-315cd7b49f9a\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.313995 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-combined-ca-bundle\") pod \"a87a557a-be2d-477a-afe4-315cd7b49f9a\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.314043 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-db-sync-config-data\") pod \"a87a557a-be2d-477a-afe4-315cd7b49f9a\" (UID: \"a87a557a-be2d-477a-afe4-315cd7b49f9a\") " Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.322018 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a87a557a-be2d-477a-afe4-315cd7b49f9a" (UID: "a87a557a-be2d-477a-afe4-315cd7b49f9a"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.322682 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87a557a-be2d-477a-afe4-315cd7b49f9a-kube-api-access-vzll2" (OuterVolumeSpecName: "kube-api-access-vzll2") pod "a87a557a-be2d-477a-afe4-315cd7b49f9a" (UID: "a87a557a-be2d-477a-afe4-315cd7b49f9a"). InnerVolumeSpecName "kube-api-access-vzll2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.352343 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a87a557a-be2d-477a-afe4-315cd7b49f9a" (UID: "a87a557a-be2d-477a-afe4-315cd7b49f9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.383095 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-config-data" (OuterVolumeSpecName: "config-data") pod "a87a557a-be2d-477a-afe4-315cd7b49f9a" (UID: "a87a557a-be2d-477a-afe4-315cd7b49f9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.416367 4980 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.416405 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.416416 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzll2\" (UniqueName: \"kubernetes.io/projected/a87a557a-be2d-477a-afe4-315cd7b49f9a-kube-api-access-vzll2\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.416428 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87a557a-be2d-477a-afe4-315cd7b49f9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.888828 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d9v7c" event={"ID":"a87a557a-be2d-477a-afe4-315cd7b49f9a","Type":"ContainerDied","Data":"e141726abebbe6d4413caa1572135dc865f8f2c41b8d7d59274db3d01fccaa05"} Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.889377 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e141726abebbe6d4413caa1572135dc865f8f2c41b8d7d59274db3d01fccaa05" Jan 07 03:50:21 crc kubenswrapper[4980]: I0107 03:50:21.889471 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-d9v7c" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.274663 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvdwd"] Jan 07 03:50:22 crc kubenswrapper[4980]: E0107 03:50:22.275886 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87a557a-be2d-477a-afe4-315cd7b49f9a" containerName="glance-db-sync" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.275964 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87a557a-be2d-477a-afe4-315cd7b49f9a" containerName="glance-db-sync" Jan 07 03:50:22 crc kubenswrapper[4980]: E0107 03:50:22.276023 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d58f3e70-86ff-4d8b-8422-45ae65b067d6" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276075 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58f3e70-86ff-4d8b-8422-45ae65b067d6" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: E0107 03:50:22.276127 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a49206c-706d-4cbf-b4fb-9cebdb720837" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276177 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a49206c-706d-4cbf-b4fb-9cebdb720837" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: E0107 03:50:22.276241 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540d5527-6e10-4850-b025-b0a8b62a8fc7" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276292 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="540d5527-6e10-4850-b025-b0a8b62a8fc7" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: E0107 03:50:22.276367 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a530ce0-0dd8-46af-87ae-d2c63cb588f3" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276419 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a530ce0-0dd8-46af-87ae-d2c63cb588f3" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: E0107 03:50:22.276476 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4907f9-9701-4aea-8ced-7a3f130bea66" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276525 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4907f9-9701-4aea-8ced-7a3f130bea66" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: E0107 03:50:22.276602 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8aaabcf-42ed-4586-8d72-bb9e14a8d369" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276656 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8aaabcf-42ed-4586-8d72-bb9e14a8d369" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276891 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="d58f3e70-86ff-4d8b-8422-45ae65b067d6" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.276960 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8aaabcf-42ed-4586-8d72-bb9e14a8d369" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.277013 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="540d5527-6e10-4850-b025-b0a8b62a8fc7" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.277063 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4907f9-9701-4aea-8ced-7a3f130bea66" containerName="mariadb-account-create-update" Jan 07 
03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.277131 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87a557a-be2d-477a-afe4-315cd7b49f9a" containerName="glance-db-sync" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.277194 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a49206c-706d-4cbf-b4fb-9cebdb720837" containerName="mariadb-account-create-update" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.277250 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a530ce0-0dd8-46af-87ae-d2c63cb588f3" containerName="mariadb-database-create" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.278121 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.298809 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvdwd"] Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.343846 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-config\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.343912 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.343950 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.344015 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.344303 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhs5n\" (UniqueName: \"kubernetes.io/projected/ec2e1228-6107-4659-91e9-96fb308753b6-kube-api-access-nhs5n\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.446297 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.446387 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhs5n\" (UniqueName: \"kubernetes.io/projected/ec2e1228-6107-4659-91e9-96fb308753b6-kube-api-access-nhs5n\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.446428 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-config\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.446464 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.446498 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.447483 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.447766 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.447907 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-config\") pod 
\"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.447983 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.466775 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhs5n\" (UniqueName: \"kubernetes.io/projected/ec2e1228-6107-4659-91e9-96fb308753b6-kube-api-access-nhs5n\") pod \"dnsmasq-dns-5b946c75cc-pvdwd\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:22 crc kubenswrapper[4980]: I0107 03:50:22.595128 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:23 crc kubenswrapper[4980]: I0107 03:50:23.239958 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvdwd"] Jan 07 03:50:23 crc kubenswrapper[4980]: W0107 03:50:23.251668 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2e1228_6107_4659_91e9_96fb308753b6.slice/crio-e95fab62a07468783fa55428e12b5e30fd50fd753a55322542595a076f6cd2fb WatchSource:0}: Error finding container e95fab62a07468783fa55428e12b5e30fd50fd753a55322542595a076f6cd2fb: Status 404 returned error can't find the container with id e95fab62a07468783fa55428e12b5e30fd50fd753a55322542595a076f6cd2fb Jan 07 03:50:23 crc kubenswrapper[4980]: I0107 03:50:23.908954 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" event={"ID":"ec2e1228-6107-4659-91e9-96fb308753b6","Type":"ContainerStarted","Data":"e95fab62a07468783fa55428e12b5e30fd50fd753a55322542595a076f6cd2fb"} Jan 07 03:50:26 crc kubenswrapper[4980]: I0107 03:50:26.943831 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hxl82" event={"ID":"f098f890-2064-479b-bd73-cf3269c4f3c2","Type":"ContainerStarted","Data":"b8257186eb6cb11ccc7ddd36ad899460d58368d15d4884b277ca743a1a8f6fcc"} Jan 07 03:50:26 crc kubenswrapper[4980]: I0107 03:50:26.947720 4980 generic.go:334] "Generic (PLEG): container finished" podID="ec2e1228-6107-4659-91e9-96fb308753b6" containerID="679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5" exitCode=0 Jan 07 03:50:26 crc kubenswrapper[4980]: I0107 03:50:26.947862 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" event={"ID":"ec2e1228-6107-4659-91e9-96fb308753b6","Type":"ContainerDied","Data":"679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5"} Jan 07 03:50:26 crc 
kubenswrapper[4980]: I0107 03:50:26.965545 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hxl82" podStartSLOduration=7.02798676 podStartE2EDuration="13.965531127s" podCreationTimestamp="2026-01-07 03:50:13 +0000 UTC" firstStartedPulling="2026-01-07 03:50:14.919708218 +0000 UTC m=+1061.485402953" lastFinishedPulling="2026-01-07 03:50:21.857252575 +0000 UTC m=+1068.422947320" observedRunningTime="2026-01-07 03:50:26.96242409 +0000 UTC m=+1073.528118825" watchObservedRunningTime="2026-01-07 03:50:26.965531127 +0000 UTC m=+1073.531225862" Jan 07 03:50:26 crc kubenswrapper[4980]: I0107 03:50:26.966076 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"131d1a68265927c1091fb1221cac3f10ddb4b13b7160810b37b1e5c329225474"} Jan 07 03:50:26 crc kubenswrapper[4980]: I0107 03:50:26.966212 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"41a580fb9adcea7fcbb0e592954e74a9d97e25f87925ba7b489b20ad9b753dd7"} Jan 07 03:50:27 crc kubenswrapper[4980]: I0107 03:50:27.984794 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"d9cca7818b1549656d4c9ff9830fa1b989e8a9f743405fe108464ca8e97e1342"} Jan 07 03:50:27 crc kubenswrapper[4980]: I0107 03:50:27.987456 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" event={"ID":"ec2e1228-6107-4659-91e9-96fb308753b6","Type":"ContainerStarted","Data":"5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4"} Jan 07 03:50:27 crc kubenswrapper[4980]: I0107 03:50:27.988019 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" 
Jan 07 03:50:28 crc kubenswrapper[4980]: I0107 03:50:28.013442 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" podStartSLOduration=6.013428697 podStartE2EDuration="6.013428697s" podCreationTimestamp="2026-01-07 03:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:28.010993211 +0000 UTC m=+1074.576687996" watchObservedRunningTime="2026-01-07 03:50:28.013428697 +0000 UTC m=+1074.579123422" Jan 07 03:50:29 crc kubenswrapper[4980]: I0107 03:50:29.001609 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"d366151960f753debebe6cf6b4ffa5c4dafbd479feb82a481964c1070bed934a"} Jan 07 03:50:29 crc kubenswrapper[4980]: I0107 03:50:29.001901 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"c28aa0fb594b026aa4293e96c439fd595d9c2126f6dff4da412737674e92ee31"} Jan 07 03:50:29 crc kubenswrapper[4980]: I0107 03:50:29.001916 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"4736db4e0e8ac7073134c4b7e38b2e5d0720ddb2dabb4dc8ceb315f988056fd1"} Jan 07 03:50:30 crc kubenswrapper[4980]: I0107 03:50:30.028067 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"c3089a19681eb4d2feff6f63fe3c7e3c50faba3a01511589f1b33f71f8b33558"} Jan 07 03:50:30 crc kubenswrapper[4980]: I0107 03:50:30.032484 4980 generic.go:334] "Generic (PLEG): container finished" podID="f098f890-2064-479b-bd73-cf3269c4f3c2" 
containerID="b8257186eb6cb11ccc7ddd36ad899460d58368d15d4884b277ca743a1a8f6fcc" exitCode=0 Jan 07 03:50:30 crc kubenswrapper[4980]: I0107 03:50:30.032610 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hxl82" event={"ID":"f098f890-2064-479b-bd73-cf3269c4f3c2","Type":"ContainerDied","Data":"b8257186eb6cb11ccc7ddd36ad899460d58368d15d4884b277ca743a1a8f6fcc"} Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.046538 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"9a313060d31d1494ae7628c625cc003dff80ea2c10f16812e72b6d640afc0d45"} Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.048164 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"6f7585018544c2ebb70f5db221b2a2d5cc549704c95c4c408c8c1db17e69cc8e"} Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.048183 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6b5878bb-8928-4957-a27d-ce18da212460","Type":"ContainerStarted","Data":"fc1d7f84684c13d9e93665ed3a831188e2bb1ee4d09fc91a1fc1b26759e62170"} Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.118974 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.333082975 podStartE2EDuration="53.118944374s" podCreationTimestamp="2026-01-07 03:49:38 +0000 UTC" firstStartedPulling="2026-01-07 03:50:12.429088294 +0000 UTC m=+1058.994783029" lastFinishedPulling="2026-01-07 03:50:28.214949693 +0000 UTC m=+1074.780644428" observedRunningTime="2026-01-07 03:50:31.112015417 +0000 UTC m=+1077.677710162" watchObservedRunningTime="2026-01-07 03:50:31.118944374 +0000 UTC m=+1077.684639119" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.391180 4980 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvdwd"] Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.391834 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" podUID="ec2e1228-6107-4659-91e9-96fb308753b6" containerName="dnsmasq-dns" containerID="cri-o://5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4" gracePeriod=10 Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.425545 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zf62k"] Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.428100 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.433759 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.460985 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zf62k"] Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.490779 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.595353 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-config-data\") pod \"f098f890-2064-479b-bd73-cf3269c4f3c2\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.595445 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-combined-ca-bundle\") pod \"f098f890-2064-479b-bd73-cf3269c4f3c2\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.595500 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x865\" (UniqueName: \"kubernetes.io/projected/f098f890-2064-479b-bd73-cf3269c4f3c2-kube-api-access-5x865\") pod \"f098f890-2064-479b-bd73-cf3269c4f3c2\" (UID: \"f098f890-2064-479b-bd73-cf3269c4f3c2\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.595959 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.596031 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 
03:50:31.596084 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.596125 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-config\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.596157 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.596293 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmgmg\" (UniqueName: \"kubernetes.io/projected/d8697de4-3469-4d64-867c-423ece890d43-kube-api-access-bmgmg\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.602540 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f098f890-2064-479b-bd73-cf3269c4f3c2-kube-api-access-5x865" (OuterVolumeSpecName: "kube-api-access-5x865") pod "f098f890-2064-479b-bd73-cf3269c4f3c2" (UID: "f098f890-2064-479b-bd73-cf3269c4f3c2"). InnerVolumeSpecName "kube-api-access-5x865". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.632714 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f098f890-2064-479b-bd73-cf3269c4f3c2" (UID: "f098f890-2064-479b-bd73-cf3269c4f3c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.644685 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-config-data" (OuterVolumeSpecName: "config-data") pod "f098f890-2064-479b-bd73-cf3269c4f3c2" (UID: "f098f890-2064-479b-bd73-cf3269c4f3c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698563 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmgmg\" (UniqueName: \"kubernetes.io/projected/d8697de4-3469-4d64-867c-423ece890d43-kube-api-access-bmgmg\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698665 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698699 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-svc\") pod 
\"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698721 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698756 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-config\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698774 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698890 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698919 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f098f890-2064-479b-bd73-cf3269c4f3c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.698930 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x865\" (UniqueName: 
\"kubernetes.io/projected/f098f890-2064-479b-bd73-cf3269c4f3c2-kube-api-access-5x865\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.699833 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.700127 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-config\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.700402 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.700585 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.702215 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.721633 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmgmg\" (UniqueName: \"kubernetes.io/projected/d8697de4-3469-4d64-867c-423ece890d43-kube-api-access-bmgmg\") pod \"dnsmasq-dns-74f6bcbc87-zf62k\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.795405 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.841505 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.903577 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-config\") pod \"ec2e1228-6107-4659-91e9-96fb308753b6\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.903680 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-dns-svc\") pod \"ec2e1228-6107-4659-91e9-96fb308753b6\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.903801 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-sb\") pod \"ec2e1228-6107-4659-91e9-96fb308753b6\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.903819 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-nb\") pod \"ec2e1228-6107-4659-91e9-96fb308753b6\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.903862 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhs5n\" (UniqueName: \"kubernetes.io/projected/ec2e1228-6107-4659-91e9-96fb308753b6-kube-api-access-nhs5n\") pod \"ec2e1228-6107-4659-91e9-96fb308753b6\" (UID: \"ec2e1228-6107-4659-91e9-96fb308753b6\") " Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.911807 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2e1228-6107-4659-91e9-96fb308753b6-kube-api-access-nhs5n" (OuterVolumeSpecName: "kube-api-access-nhs5n") pod "ec2e1228-6107-4659-91e9-96fb308753b6" (UID: "ec2e1228-6107-4659-91e9-96fb308753b6"). InnerVolumeSpecName "kube-api-access-nhs5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.965939 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec2e1228-6107-4659-91e9-96fb308753b6" (UID: "ec2e1228-6107-4659-91e9-96fb308753b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.971137 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec2e1228-6107-4659-91e9-96fb308753b6" (UID: "ec2e1228-6107-4659-91e9-96fb308753b6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.973673 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-config" (OuterVolumeSpecName: "config") pod "ec2e1228-6107-4659-91e9-96fb308753b6" (UID: "ec2e1228-6107-4659-91e9-96fb308753b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:31 crc kubenswrapper[4980]: I0107 03:50:31.977374 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec2e1228-6107-4659-91e9-96fb308753b6" (UID: "ec2e1228-6107-4659-91e9-96fb308753b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.006339 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.006365 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.006377 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhs5n\" (UniqueName: \"kubernetes.io/projected/ec2e1228-6107-4659-91e9-96fb308753b6-kube-api-access-nhs5n\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.006388 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-config\") on node \"crc\" DevicePath \"\"" Jan 07 
03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.006396 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2e1228-6107-4659-91e9-96fb308753b6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.056433 4980 generic.go:334] "Generic (PLEG): container finished" podID="ec2e1228-6107-4659-91e9-96fb308753b6" containerID="5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4" exitCode=0 Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.056485 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" event={"ID":"ec2e1228-6107-4659-91e9-96fb308753b6","Type":"ContainerDied","Data":"5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4"} Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.056542 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" event={"ID":"ec2e1228-6107-4659-91e9-96fb308753b6","Type":"ContainerDied","Data":"e95fab62a07468783fa55428e12b5e30fd50fd753a55322542595a076f6cd2fb"} Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.056577 4980 scope.go:117] "RemoveContainer" containerID="5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.056600 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvdwd" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.058583 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hxl82" event={"ID":"f098f890-2064-479b-bd73-cf3269c4f3c2","Type":"ContainerDied","Data":"9d1db080c2b22db1375720422bc7450973dfcea63207760107e0e2b11caa60e6"} Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.058626 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hxl82" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.058634 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d1db080c2b22db1375720422bc7450973dfcea63207760107e0e2b11caa60e6" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.102517 4980 scope.go:117] "RemoveContainer" containerID="679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.119140 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvdwd"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.126813 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvdwd"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.132737 4980 scope.go:117] "RemoveContainer" containerID="5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4" Jan 07 03:50:32 crc kubenswrapper[4980]: E0107 03:50:32.138316 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4\": container with ID starting with 5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4 not found: ID does not exist" containerID="5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.138413 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4"} err="failed to get container status \"5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4\": rpc error: code = NotFound desc = could not find container \"5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4\": container with ID starting with 5a5515ee49e454288ffbbdbcca113da01911c3fbbee3547e8053671380969ca4 not 
found: ID does not exist" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.138503 4980 scope.go:117] "RemoveContainer" containerID="679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5" Jan 07 03:50:32 crc kubenswrapper[4980]: E0107 03:50:32.139024 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5\": container with ID starting with 679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5 not found: ID does not exist" containerID="679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.139086 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5"} err="failed to get container status \"679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5\": rpc error: code = NotFound desc = could not find container \"679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5\": container with ID starting with 679b7b188652d06ba86cf412e6f729a29086916c3cd9b6f0b1ea3626d777cef5 not found: ID does not exist" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.290941 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zf62k"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.327214 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-q7p9b"] Jan 07 03:50:32 crc kubenswrapper[4980]: E0107 03:50:32.327686 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f098f890-2064-479b-bd73-cf3269c4f3c2" containerName="keystone-db-sync" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.327705 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f098f890-2064-479b-bd73-cf3269c4f3c2" containerName="keystone-db-sync" Jan 07 03:50:32 
crc kubenswrapper[4980]: E0107 03:50:32.327723 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2e1228-6107-4659-91e9-96fb308753b6" containerName="dnsmasq-dns" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.327729 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2e1228-6107-4659-91e9-96fb308753b6" containerName="dnsmasq-dns" Jan 07 03:50:32 crc kubenswrapper[4980]: E0107 03:50:32.327740 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2e1228-6107-4659-91e9-96fb308753b6" containerName="init" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.327746 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2e1228-6107-4659-91e9-96fb308753b6" containerName="init" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.327904 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2e1228-6107-4659-91e9-96fb308753b6" containerName="dnsmasq-dns" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.327923 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f098f890-2064-479b-bd73-cf3269c4f3c2" containerName="keystone-db-sync" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.328525 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.335125 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.335372 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.335589 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.335718 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zf62k"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.336082 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fsjw7" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.336367 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.351677 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-prx9r"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.355276 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.368180 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q7p9b"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.375477 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-prx9r"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.521568 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.521624 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-config\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.521648 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-svc\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.521706 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srfw9\" (UniqueName: \"kubernetes.io/projected/2331397d-82df-4a69-80ff-9702b0fc66ce-kube-api-access-srfw9\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " 
pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.521732 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.521789 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-config-data\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.521990 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-combined-ca-bundle\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.522037 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-fernet-keys\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.522067 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: 
\"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.522089 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-credential-keys\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.522128 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-scripts\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.522207 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnjc\" (UniqueName: \"kubernetes.io/projected/1812a3e2-a8c1-476d-b0f0-44dac1e07899-kube-api-access-fwnjc\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.606068 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7658bcf7b7-pdhtz"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.607277 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.613704 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-g5slk" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.618737 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.618860 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.619036 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623416 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-combined-ca-bundle\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623454 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-fernet-keys\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623477 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623495 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-credential-keys\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623515 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-scripts\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623566 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnjc\" (UniqueName: \"kubernetes.io/projected/1812a3e2-a8c1-476d-b0f0-44dac1e07899-kube-api-access-fwnjc\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623593 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623613 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-config\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623630 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-svc\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623660 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srfw9\" (UniqueName: \"kubernetes.io/projected/2331397d-82df-4a69-80ff-9702b0fc66ce-kube-api-access-srfw9\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623686 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.623705 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-config-data\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.628347 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-svc\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.628912 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-sb\") pod 
\"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.629420 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-config\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.630302 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.630755 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.630956 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-config-data\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.631489 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-combined-ca-bundle\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " 
pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.637827 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-credential-keys\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.641248 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-fernet-keys\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.644389 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-scripts\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.726714 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-config-data\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.727103 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99klz\" (UniqueName: \"kubernetes.io/projected/02ac366d-3498-4985-b9d8-5d145f5c3048-kube-api-access-99klz\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 
03:50:32.727127 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02ac366d-3498-4985-b9d8-5d145f5c3048-logs\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.727155 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02ac366d-3498-4985-b9d8-5d145f5c3048-horizon-secret-key\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.727184 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-scripts\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.730684 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srfw9\" (UniqueName: \"kubernetes.io/projected/2331397d-82df-4a69-80ff-9702b0fc66ce-kube-api-access-srfw9\") pod \"keystone-bootstrap-q7p9b\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.743302 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnjc\" (UniqueName: \"kubernetes.io/projected/1812a3e2-a8c1-476d-b0f0-44dac1e07899-kube-api-access-fwnjc\") pod \"dnsmasq-dns-847c4cc679-prx9r\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.754637 4980 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-b9flf"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.755934 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9flf" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.770945 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fkl2k" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.771134 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.771240 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.787616 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7658bcf7b7-pdhtz"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.824855 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9flf"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.828265 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-config-data\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.828385 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99klz\" (UniqueName: \"kubernetes.io/projected/02ac366d-3498-4985-b9d8-5d145f5c3048-kube-api-access-99klz\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.828408 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/02ac366d-3498-4985-b9d8-5d145f5c3048-logs\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.828439 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02ac366d-3498-4985-b9d8-5d145f5c3048-horizon-secret-key\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.828471 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-scripts\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.829092 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-scripts\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.830230 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-config-data\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.830784 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02ac366d-3498-4985-b9d8-5d145f5c3048-logs\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " 
pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.847103 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02ac366d-3498-4985-b9d8-5d145f5c3048-horizon-secret-key\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.867768 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.869472 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.896743 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.897125 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.928946 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9f6jz"] Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.929488 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-combined-ca-bundle\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.929532 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-config-data\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf" 
Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.929611 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-scripts\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.929660 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhc7\" (UniqueName: \"kubernetes.io/projected/2df19fa0-d6ce-4539-8f87-9d6935314e82-kube-api-access-2mhc7\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.929703 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2df19fa0-d6ce-4539-8f87-9d6935314e82-etc-machine-id\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.929722 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-db-sync-config-data\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.930283 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9f6jz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.948375 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99klz\" (UniqueName: \"kubernetes.io/projected/02ac366d-3498-4985-b9d8-5d145f5c3048-kube-api-access-99klz\") pod \"horizon-7658bcf7b7-pdhtz\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.974264 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mlwsp" Jan 07 03:50:32 crc kubenswrapper[4980]: I0107 03:50:32.974758 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.004744 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.007891 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q7p9b"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.033523 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9f6jz"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.040135 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-run-httpd\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.040420 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-combined-ca-bundle\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.040526 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-config-data\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.040651 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-db-sync-config-data\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.040753 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-config-data\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.040909 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-combined-ca-bundle\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.041007 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-scripts\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.041164 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-scripts\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.041252 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhc7\" (UniqueName: \"kubernetes.io/projected/2df19fa0-d6ce-4539-8f87-9d6935314e82-kube-api-access-2mhc7\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.041351 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.041459 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-log-httpd\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.042132 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2df19fa0-d6ce-4539-8f87-9d6935314e82-etc-machine-id\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.042231 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-db-sync-config-data\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.042337 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w2ql\" (UniqueName: \"kubernetes.io/projected/223a79c7-9460-4b16-a65d-90e2c0751dfa-kube-api-access-8w2ql\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.047827 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.042292 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-prx9r"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.042396 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2df19fa0-d6ce-4539-8f87-9d6935314e82-etc-machine-id\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.048857 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtw4z\" (UniqueName: \"kubernetes.io/projected/ea978d21-b28c-4714-8f07-b70f84f0efa8-kube-api-access-dtw4z\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.052972 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-combined-ca-bundle\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.061008 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-t6522"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.062357 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.063414 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-scripts\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.064252 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-config-data\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.084931 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-db-sync-config-data\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.096052 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.096402 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bd7cc"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.096519 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.112697 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6fd86d65-8ldnw"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.115159 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.144144 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhc7\" (UniqueName: \"kubernetes.io/projected/2df19fa0-d6ce-4539-8f87-9d6935314e82-kube-api-access-2mhc7\") pod \"cinder-db-sync-b9flf\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152346 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-combined-ca-bundle\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152429 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-scripts\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152470 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152500 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-log-httpd\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152536 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w2ql\" (UniqueName: \"kubernetes.io/projected/223a79c7-9460-4b16-a65d-90e2c0751dfa-kube-api-access-8w2ql\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152588 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152622 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtw4z\" (UniqueName: \"kubernetes.io/projected/ea978d21-b28c-4714-8f07-b70f84f0efa8-kube-api-access-dtw4z\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152665 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-run-httpd\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152718 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-db-sync-config-data\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.152749 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-config-data\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.156094 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-log-httpd\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.156362 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-run-httpd\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.168485 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-config-data\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.169787 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-scripts\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.170361 4980 generic.go:334] "Generic (PLEG): container finished" podID="d8697de4-3469-4d64-867c-423ece890d43" containerID="30c85ab9d9f232a3491f12183bd884f8d341a4bdef8f315443e2d4cfd2e9f6fa" exitCode=0
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.170408 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" event={"ID":"d8697de4-3469-4d64-867c-423ece890d43","Type":"ContainerDied","Data":"30c85ab9d9f232a3491f12183bd884f8d341a4bdef8f315443e2d4cfd2e9f6fa"}
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.170440 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" event={"ID":"d8697de4-3469-4d64-867c-423ece890d43","Type":"ContainerStarted","Data":"50c67958a98a8f6cfc2b04afa2729bbc2270f884044b499473c5f14d4d106cdc"}
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.171159 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.180026 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.184847 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-combined-ca-bundle\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.193219 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t6522"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.194247 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-db-sync-config-data\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.221814 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w2ql\" (UniqueName: \"kubernetes.io/projected/223a79c7-9460-4b16-a65d-90e2c0751dfa-kube-api-access-8w2ql\") pod \"ceilometer-0\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.222433 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.226087 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtw4z\" (UniqueName: \"kubernetes.io/projected/ea978d21-b28c-4714-8f07-b70f84f0efa8-kube-api-access-dtw4z\") pod \"barbican-db-sync-9f6jz\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.227574 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7658bcf7b7-pdhtz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.240474 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6fd86d65-8ldnw"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.250223 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bng92"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.251826 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.254806 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-config\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.255015 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d74f1e6-2229-4ff7-8c80-b12d09285da4-horizon-secret-key\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.255268 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxhl2\" (UniqueName: \"kubernetes.io/projected/9d74f1e6-2229-4ff7-8c80-b12d09285da4-kube-api-access-wxhl2\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.255350 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-scripts\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.255511 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbpjb\" (UniqueName: \"kubernetes.io/projected/d250b696-8adc-41af-8a3c-36c7a87a721f-kube-api-access-xbpjb\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.255625 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-config-data\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.255731 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d74f1e6-2229-4ff7-8c80-b12d09285da4-logs\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.255865 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-combined-ca-bundle\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.264426 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.264688 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-j9n2s"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.264857 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.325642 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bng92"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.334401 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.346060 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.357574 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.357811 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d7gzb"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.357708 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.358152 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.358928 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d74f1e6-2229-4ff7-8c80-b12d09285da4-logs\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.358982 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-combined-ca-bundle\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359010 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-config\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359053 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d74f1e6-2229-4ff7-8c80-b12d09285da4-horizon-secret-key\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359094 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjmv\" (UniqueName: \"kubernetes.io/projected/bc47b5ba-b1a8-4615-be80-db3ae1580399-kube-api-access-gwjmv\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359122 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-combined-ca-bundle\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359148 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-config-data\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359181 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc47b5ba-b1a8-4615-be80-db3ae1580399-logs\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359201 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxhl2\" (UniqueName: \"kubernetes.io/projected/9d74f1e6-2229-4ff7-8c80-b12d09285da4-kube-api-access-wxhl2\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359219 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-scripts\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359268 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbpjb\" (UniqueName: \"kubernetes.io/projected/d250b696-8adc-41af-8a3c-36c7a87a721f-kube-api-access-xbpjb\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359290 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-config-data\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359312 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-scripts\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.359996 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d74f1e6-2229-4ff7-8c80-b12d09285da4-logs\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.363708 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-combined-ca-bundle\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.364925 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-scripts\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.365455 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-config\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.367710 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-config-data\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.369378 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d74f1e6-2229-4ff7-8c80-b12d09285da4-horizon-secret-key\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.397770 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbpjb\" (UniqueName: \"kubernetes.io/projected/d250b696-8adc-41af-8a3c-36c7a87a721f-kube-api-access-xbpjb\") pod \"neutron-db-sync-t6522\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.398190 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9f6jz"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.403185 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.413086 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-prx9r"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.426534 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxhl2\" (UniqueName: \"kubernetes.io/projected/9d74f1e6-2229-4ff7-8c80-b12d09285da4-kube-api-access-wxhl2\") pod \"horizon-6f6fd86d65-8ldnw\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.438667 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9flf"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.454628 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qrxm6"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.455975 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.457310 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6522"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.460906 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-config-data\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.460942 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.460968 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwjmv\" (UniqueName: \"kubernetes.io/projected/bc47b5ba-b1a8-4615-be80-db3ae1580399-kube-api-access-gwjmv\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461034 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-combined-ca-bundle\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461683 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-logs\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461713 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-config-data\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461746 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc47b5ba-b1a8-4615-be80-db3ae1580399-logs\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461773 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-scripts\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461821 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461842 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461871 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cxrk\" (UniqueName: \"kubernetes.io/projected/18a5ba18-b849-43f4-aa55-72853d477092-kube-api-access-6cxrk\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461890 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-scripts\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.461915 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.462727 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc47b5ba-b1a8-4615-be80-db3ae1580399-logs\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.463455 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qrxm6"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.475981 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-combined-ca-bundle\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.477661 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-scripts\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.490072 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6fd86d65-8ldnw"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.501258 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwjmv\" (UniqueName: \"kubernetes.io/projected/bc47b5ba-b1a8-4615-be80-db3ae1580399-kube-api-access-gwjmv\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.505788 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-config-data\") pod \"placement-db-sync-bng92\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " pod="openstack/placement-db-sync-bng92"
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.546121 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.548191 4980 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.554544 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.554606 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565047 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-config-data\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565097 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565139 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-config\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565157 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-logs\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc 
kubenswrapper[4980]: I0107 03:50:33.565195 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565238 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565263 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565297 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565322 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 
03:50:33.565341 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cxrk\" (UniqueName: \"kubernetes.io/projected/18a5ba18-b849-43f4-aa55-72853d477092-kube-api-access-6cxrk\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565360 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-scripts\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565381 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565400 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28l9\" (UniqueName: \"kubernetes.io/projected/9dc4048d-d6ae-4100-b135-19e957d54eb6-kube-api-access-k28l9\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.565425 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: 
I0107 03:50:33.567334 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.567579 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-logs\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.567850 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.568219 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.596902 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-scripts\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.597277 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " 
pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.598514 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cxrk\" (UniqueName: \"kubernetes.io/projected/18a5ba18-b849-43f4-aa55-72853d477092-kube-api-access-6cxrk\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.603901 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-config-data\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.606581 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.627327 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bng92" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.650775 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666702 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666787 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666812 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666837 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k28l9\" (UniqueName: \"kubernetes.io/projected/9dc4048d-d6ae-4100-b135-19e957d54eb6-kube-api-access-k28l9\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666879 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666914 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666951 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwrd\" (UniqueName: \"kubernetes.io/projected/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-kube-api-access-7mwrd\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.666996 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-config\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.667039 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-logs\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " 
pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.667069 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.667090 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.667126 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.667159 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.667194 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.668162 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.668669 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.668709 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-config\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.669260 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.669763 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.695615 4980 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d7gzb" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.718762 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.720383 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k28l9\" (UniqueName: \"kubernetes.io/projected/9dc4048d-d6ae-4100-b135-19e957d54eb6-kube-api-access-k28l9\") pod \"dnsmasq-dns-785d8bcb8c-qrxm6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.768925 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-logs\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.768981 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.769025 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.769083 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.769117 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.769147 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.769171 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.769203 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mwrd\" (UniqueName: \"kubernetes.io/projected/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-kube-api-access-7mwrd\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.769802 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.770363 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.772318 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.775508 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.776441 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.776747 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.786247 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " 
pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.796309 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.806288 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwrd\" (UniqueName: \"kubernetes.io/projected/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-kube-api-access-7mwrd\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.807947 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.818677 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.819331 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2e1228-6107-4659-91e9-96fb308753b6" path="/var/lib/kubelet/pods/ec2e1228-6107-4659-91e9-96fb308753b6/volumes" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.857041 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.900898 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q7p9b"] Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.906345 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:33 crc kubenswrapper[4980]: W0107 03:50:33.908914 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2331397d_82df_4a69_80ff_9702b0fc66ce.slice/crio-53c92743a92cbe99b0c8e44a40af1ba685c8c73c0f67092e9c76316072542514 WatchSource:0}: Error finding container 53c92743a92cbe99b0c8e44a40af1ba685c8c73c0f67092e9c76316072542514: Status 404 returned error can't find the container with id 53c92743a92cbe99b0c8e44a40af1ba685c8c73c0f67092e9c76316072542514 Jan 07 03:50:33 crc kubenswrapper[4980]: I0107 03:50:33.937755 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.100310 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.184128 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmgmg\" (UniqueName: \"kubernetes.io/projected/d8697de4-3469-4d64-867c-423ece890d43-kube-api-access-bmgmg\") pod \"d8697de4-3469-4d64-867c-423ece890d43\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.184227 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-sb\") pod \"d8697de4-3469-4d64-867c-423ece890d43\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.185144 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-config\") pod \"d8697de4-3469-4d64-867c-423ece890d43\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.185201 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-swift-storage-0\") pod \"d8697de4-3469-4d64-867c-423ece890d43\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.185301 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-nb\") pod \"d8697de4-3469-4d64-867c-423ece890d43\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.185331 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-svc\") pod \"d8697de4-3469-4d64-867c-423ece890d43\" (UID: \"d8697de4-3469-4d64-867c-423ece890d43\") " Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.199184 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8697de4-3469-4d64-867c-423ece890d43-kube-api-access-bmgmg" (OuterVolumeSpecName: "kube-api-access-bmgmg") pod "d8697de4-3469-4d64-867c-423ece890d43" (UID: "d8697de4-3469-4d64-867c-423ece890d43"). InnerVolumeSpecName "kube-api-access-bmgmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.221784 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-config" (OuterVolumeSpecName: "config") pod "d8697de4-3469-4d64-867c-423ece890d43" (UID: "d8697de4-3469-4d64-867c-423ece890d43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.224494 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q7p9b" event={"ID":"2331397d-82df-4a69-80ff-9702b0fc66ce","Type":"ContainerStarted","Data":"a2369d40f6bf481bb3301fab5e969e1e3bfbe5c0c42747c89f8d42fc32114e25"} Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.224603 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q7p9b" event={"ID":"2331397d-82df-4a69-80ff-9702b0fc66ce","Type":"ContainerStarted","Data":"53c92743a92cbe99b0c8e44a40af1ba685c8c73c0f67092e9c76316072542514"} Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.225406 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8697de4-3469-4d64-867c-423ece890d43" (UID: "d8697de4-3469-4d64-867c-423ece890d43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.228734 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" event={"ID":"d8697de4-3469-4d64-867c-423ece890d43","Type":"ContainerDied","Data":"50c67958a98a8f6cfc2b04afa2729bbc2270f884044b499473c5f14d4d106cdc"} Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.228777 4980 scope.go:117] "RemoveContainer" containerID="30c85ab9d9f232a3491f12183bd884f8d341a4bdef8f315443e2d4cfd2e9f6fa" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.228883 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zf62k" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.234761 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8697de4-3469-4d64-867c-423ece890d43" (UID: "d8697de4-3469-4d64-867c-423ece890d43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.235568 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8697de4-3469-4d64-867c-423ece890d43" (UID: "d8697de4-3469-4d64-867c-423ece890d43"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.247714 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-prx9r"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.258181 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-q7p9b" podStartSLOduration=2.25816552 podStartE2EDuration="2.25816552s" podCreationTimestamp="2026-01-07 03:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:34.240134328 +0000 UTC m=+1080.805829053" watchObservedRunningTime="2026-01-07 03:50:34.25816552 +0000 UTC m=+1080.823860255" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.288872 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmgmg\" (UniqueName: \"kubernetes.io/projected/d8697de4-3469-4d64-867c-423ece890d43-kube-api-access-bmgmg\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:34 crc 
kubenswrapper[4980]: I0107 03:50:34.288902 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.288913 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.288922 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.288933 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.296147 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7658bcf7b7-pdhtz"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.305453 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8697de4-3469-4d64-867c-423ece890d43" (UID: "d8697de4-3469-4d64-867c-423ece890d43"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.339814 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:50:34 crc kubenswrapper[4980]: W0107 03:50:34.344501 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod223a79c7_9460_4b16_a65d_90e2c0751dfa.slice/crio-4cc923c5cc1b21b84c006745109cc17d19e738ad0fcb791843e82f683df2797b WatchSource:0}: Error finding container 4cc923c5cc1b21b84c006745109cc17d19e738ad0fcb791843e82f683df2797b: Status 404 returned error can't find the container with id 4cc923c5cc1b21b84c006745109cc17d19e738ad0fcb791843e82f683df2797b Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.390503 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8697de4-3469-4d64-867c-423ece890d43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.502701 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6fd86d65-8ldnw"] Jan 07 03:50:34 crc kubenswrapper[4980]: W0107 03:50:34.510338 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d74f1e6_2229_4ff7_8c80_b12d09285da4.slice/crio-980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851 WatchSource:0}: Error finding container 980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851: Status 404 returned error can't find the container with id 980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851 Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.659159 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zf62k"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.665443 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-74f6bcbc87-zf62k"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.672946 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9f6jz"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.709985 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t6522"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.716233 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9flf"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.768636 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bng92"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.783601 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qrxm6"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.788475 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:50:34 crc kubenswrapper[4980]: I0107 03:50:34.866433 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:50:34 crc kubenswrapper[4980]: W0107 03:50:34.893432 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded558a32_6e6b_4c7e_85cb_bb4b33739e26.slice/crio-d3434693f460df2d06a7c61e4f8eb6f5d534c55c00e7800bcde8650d95a21dc0 WatchSource:0}: Error finding container d3434693f460df2d06a7c61e4f8eb6f5d534c55c00e7800bcde8650d95a21dc0: Status 404 returned error can't find the container with id d3434693f460df2d06a7c61e4f8eb6f5d534c55c00e7800bcde8650d95a21dc0 Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.241439 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ed558a32-6e6b-4c7e-85cb-bb4b33739e26","Type":"ContainerStarted","Data":"d3434693f460df2d06a7c61e4f8eb6f5d534c55c00e7800bcde8650d95a21dc0"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.243807 4980 generic.go:334] "Generic (PLEG): container finished" podID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerID="63784fa003b918f9cce095f896e230c7a044684ef1f6ce3e0238d37e15987cdf" exitCode=0 Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.243852 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" event={"ID":"9dc4048d-d6ae-4100-b135-19e957d54eb6","Type":"ContainerDied","Data":"63784fa003b918f9cce095f896e230c7a044684ef1f6ce3e0238d37e15987cdf"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.243867 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" event={"ID":"9dc4048d-d6ae-4100-b135-19e957d54eb6","Type":"ContainerStarted","Data":"9e5f2199761f48e63eba80618ae06ffaa89b7e385b8232d479d73285c967e039"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.248321 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9flf" event={"ID":"2df19fa0-d6ce-4539-8f87-9d6935314e82","Type":"ContainerStarted","Data":"d0f234bb728557e8a9ca35e821880d36ed252e171b1dc07bd2f8d0a6d4eb3244"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.249893 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bng92" event={"ID":"bc47b5ba-b1a8-4615-be80-db3ae1580399","Type":"ContainerStarted","Data":"95139a78aeceadaa609ab1e1e56221735d09977cc83050ffd3b25146056bdabd"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.251329 4980 generic.go:334] "Generic (PLEG): container finished" podID="1812a3e2-a8c1-476d-b0f0-44dac1e07899" containerID="9fd8ce2eff1b34c826dcdb09127f219ca36b7cd8cf631291d72289c58980af8c" exitCode=0 Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.251370 4980 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-prx9r" event={"ID":"1812a3e2-a8c1-476d-b0f0-44dac1e07899","Type":"ContainerDied","Data":"9fd8ce2eff1b34c826dcdb09127f219ca36b7cd8cf631291d72289c58980af8c"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.251386 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-prx9r" event={"ID":"1812a3e2-a8c1-476d-b0f0-44dac1e07899","Type":"ContainerStarted","Data":"758dd0c801163ee3257ac5d03974cb5682d4e44d2451a5822b6febeadf93c8d7"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.258708 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18a5ba18-b849-43f4-aa55-72853d477092","Type":"ContainerStarted","Data":"90e4d043c38bec4bb68d0248aea81852ade668a854976723360b7b59c5123c87"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.272835 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerStarted","Data":"4cc923c5cc1b21b84c006745109cc17d19e738ad0fcb791843e82f683df2797b"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.276071 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7658bcf7b7-pdhtz" event={"ID":"02ac366d-3498-4985-b9d8-5d145f5c3048","Type":"ContainerStarted","Data":"ae764c7cb94c810f41491bac3dd616ae17de0553a9329285ec26df903913e708"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.311861 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6fd86d65-8ldnw" event={"ID":"9d74f1e6-2229-4ff7-8c80-b12d09285da4","Type":"ContainerStarted","Data":"980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.317093 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6522" 
event={"ID":"d250b696-8adc-41af-8a3c-36c7a87a721f","Type":"ContainerStarted","Data":"b247233d37522498e864a29b26636f8e7437eae0bd7181c80835032b566449b3"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.317129 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6522" event={"ID":"d250b696-8adc-41af-8a3c-36c7a87a721f","Type":"ContainerStarted","Data":"a2ec6070b20d8becdd899df363aec8d7d8c9d14ddefdfa269c53c5d01b6a7a75"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.339241 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-t6522" podStartSLOduration=3.339226944 podStartE2EDuration="3.339226944s" podCreationTimestamp="2026-01-07 03:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:35.337912124 +0000 UTC m=+1081.903606859" watchObservedRunningTime="2026-01-07 03:50:35.339226944 +0000 UTC m=+1081.904921669" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.355994 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9f6jz" event={"ID":"ea978d21-b28c-4714-8f07-b70f84f0efa8","Type":"ContainerStarted","Data":"db977c9914a79697b1ce767bea82246f644d8ab1f8e1bfd1e2d05d96db0022b9"} Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.633435 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.734839 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-nb\") pod \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.734886 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-sb\") pod \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.734965 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwnjc\" (UniqueName: \"kubernetes.io/projected/1812a3e2-a8c1-476d-b0f0-44dac1e07899-kube-api-access-fwnjc\") pod \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.734990 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-svc\") pod \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.735134 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-config\") pod \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.735219 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-swift-storage-0\") pod \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\" (UID: \"1812a3e2-a8c1-476d-b0f0-44dac1e07899\") " Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.739697 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1812a3e2-a8c1-476d-b0f0-44dac1e07899-kube-api-access-fwnjc" (OuterVolumeSpecName: "kube-api-access-fwnjc") pod "1812a3e2-a8c1-476d-b0f0-44dac1e07899" (UID: "1812a3e2-a8c1-476d-b0f0-44dac1e07899"). InnerVolumeSpecName "kube-api-access-fwnjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.768230 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1812a3e2-a8c1-476d-b0f0-44dac1e07899" (UID: "1812a3e2-a8c1-476d-b0f0-44dac1e07899"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.768250 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1812a3e2-a8c1-476d-b0f0-44dac1e07899" (UID: "1812a3e2-a8c1-476d-b0f0-44dac1e07899"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.777735 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8697de4-3469-4d64-867c-423ece890d43" path="/var/lib/kubelet/pods/d8697de4-3469-4d64-867c-423ece890d43/volumes" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.792086 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-config" (OuterVolumeSpecName: "config") pod "1812a3e2-a8c1-476d-b0f0-44dac1e07899" (UID: "1812a3e2-a8c1-476d-b0f0-44dac1e07899"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.800778 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1812a3e2-a8c1-476d-b0f0-44dac1e07899" (UID: "1812a3e2-a8c1-476d-b0f0-44dac1e07899"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.824970 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1812a3e2-a8c1-476d-b0f0-44dac1e07899" (UID: "1812a3e2-a8c1-476d-b0f0-44dac1e07899"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.837435 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwnjc\" (UniqueName: \"kubernetes.io/projected/1812a3e2-a8c1-476d-b0f0-44dac1e07899-kube-api-access-fwnjc\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.837510 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.837521 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.837531 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.837542 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:35 crc kubenswrapper[4980]: I0107 03:50:35.837565 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1812a3e2-a8c1-476d-b0f0-44dac1e07899-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.256081 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.298052 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6fd86d65-8ldnw"] Jan 07 03:50:36 
crc kubenswrapper[4980]: I0107 03:50:36.330351 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.339110 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f6b9d9cdf-2t4tv"] Jan 07 03:50:36 crc kubenswrapper[4980]: E0107 03:50:36.339461 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1812a3e2-a8c1-476d-b0f0-44dac1e07899" containerName="init" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.339476 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1812a3e2-a8c1-476d-b0f0-44dac1e07899" containerName="init" Jan 07 03:50:36 crc kubenswrapper[4980]: E0107 03:50:36.339515 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8697de4-3469-4d64-867c-423ece890d43" containerName="init" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.339522 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8697de4-3469-4d64-867c-423ece890d43" containerName="init" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.339693 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8697de4-3469-4d64-867c-423ece890d43" containerName="init" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.339711 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="1812a3e2-a8c1-476d-b0f0-44dac1e07899" containerName="init" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.350783 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.358257 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f6b9d9cdf-2t4tv"] Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.370057 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.401387 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18a5ba18-b849-43f4-aa55-72853d477092","Type":"ContainerStarted","Data":"b33d41de48a968d7f84207c3caa603878a416b959409771c1c50773b2c1bfb15"} Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.410762 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" event={"ID":"9dc4048d-d6ae-4100-b135-19e957d54eb6","Type":"ContainerStarted","Data":"d0363be642b9f8eb8a5626a9c1a6f14b3c7d6b196431e1e63297810b4ab2620e"} Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.411206 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.414835 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed558a32-6e6b-4c7e-85cb-bb4b33739e26","Type":"ContainerStarted","Data":"fc2106ea7ab31638ff93479e921cd4bc6701cb1750eefed32270c18f355c268c"} Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.426167 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-prx9r" event={"ID":"1812a3e2-a8c1-476d-b0f0-44dac1e07899","Type":"ContainerDied","Data":"758dd0c801163ee3257ac5d03974cb5682d4e44d2451a5822b6febeadf93c8d7"} Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.426205 4980 scope.go:117] "RemoveContainer" 
containerID="9fd8ce2eff1b34c826dcdb09127f219ca36b7cd8cf631291d72289c58980af8c" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.426463 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-prx9r" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.428540 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" podStartSLOduration=3.428524795 podStartE2EDuration="3.428524795s" podCreationTimestamp="2026-01-07 03:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:36.427476882 +0000 UTC m=+1082.993171617" watchObservedRunningTime="2026-01-07 03:50:36.428524795 +0000 UTC m=+1082.994219530" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.449064 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-scripts\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.449159 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c5e78b-d013-4936-a6a0-639aff10ff45-logs\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.449188 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/37c5e78b-d013-4936-a6a0-639aff10ff45-horizon-secret-key\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " 
pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.449219 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-config-data\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.449240 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4wq\" (UniqueName: \"kubernetes.io/projected/37c5e78b-d013-4936-a6a0-639aff10ff45-kube-api-access-7k4wq\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.490680 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-prx9r"] Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.500002 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-prx9r"] Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.542624 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.542712 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.553800 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c5e78b-d013-4936-a6a0-639aff10ff45-logs\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.554298 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c5e78b-d013-4936-a6a0-639aff10ff45-logs\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.554384 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/37c5e78b-d013-4936-a6a0-639aff10ff45-horizon-secret-key\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.555639 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-config-data\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.555686 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k4wq\" (UniqueName: \"kubernetes.io/projected/37c5e78b-d013-4936-a6a0-639aff10ff45-kube-api-access-7k4wq\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.555805 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-scripts\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.556341 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-scripts\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.557305 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-config-data\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.558760 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/37c5e78b-d013-4936-a6a0-639aff10ff45-horizon-secret-key\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.579815 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k4wq\" (UniqueName: \"kubernetes.io/projected/37c5e78b-d013-4936-a6a0-639aff10ff45-kube-api-access-7k4wq\") pod \"horizon-f6b9d9cdf-2t4tv\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:36 crc kubenswrapper[4980]: I0107 03:50:36.685418 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.289167 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f6b9d9cdf-2t4tv"] Jan 07 03:50:37 crc kubenswrapper[4980]: W0107 03:50:37.338278 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37c5e78b_d013_4936_a6a0_639aff10ff45.slice/crio-52d3a39fb0d8e4deed60674de64979a1c7295cd707e0ce6a0860c3d8ad4c7c70 WatchSource:0}: Error finding container 52d3a39fb0d8e4deed60674de64979a1c7295cd707e0ce6a0860c3d8ad4c7c70: Status 404 returned error can't find the container with id 52d3a39fb0d8e4deed60674de64979a1c7295cd707e0ce6a0860c3d8ad4c7c70 Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.444931 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6b9d9cdf-2t4tv" event={"ID":"37c5e78b-d013-4936-a6a0-639aff10ff45","Type":"ContainerStarted","Data":"52d3a39fb0d8e4deed60674de64979a1c7295cd707e0ce6a0860c3d8ad4c7c70"} Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.454244 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed558a32-6e6b-4c7e-85cb-bb4b33739e26","Type":"ContainerStarted","Data":"bfaea25b96a6f49f2d33bcb14a187fd532fff228084c7fb52d9a93bf0b45bf91"} Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.454269 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerName="glance-log" containerID="cri-o://fc2106ea7ab31638ff93479e921cd4bc6701cb1750eefed32270c18f355c268c" gracePeriod=30 Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.454505 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" 
containerName="glance-httpd" containerID="cri-o://bfaea25b96a6f49f2d33bcb14a187fd532fff228084c7fb52d9a93bf0b45bf91" gracePeriod=30 Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.468348 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18a5ba18-b849-43f4-aa55-72853d477092","Type":"ContainerStarted","Data":"b65c35c6b92560c20825dd16ac96d8e227a01011813a6a3245a15ab1567ba93c"} Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.470637 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-log" containerID="cri-o://b33d41de48a968d7f84207c3caa603878a416b959409771c1c50773b2c1bfb15" gracePeriod=30 Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.471449 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-httpd" containerID="cri-o://b65c35c6b92560c20825dd16ac96d8e227a01011813a6a3245a15ab1567ba93c" gracePeriod=30 Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.501418 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.501396553 podStartE2EDuration="4.501396553s" podCreationTimestamp="2026-01-07 03:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:37.490819024 +0000 UTC m=+1084.056513769" watchObservedRunningTime="2026-01-07 03:50:37.501396553 +0000 UTC m=+1084.067091288" Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.528173 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.528153738 podStartE2EDuration="4.528153738s" 
podCreationTimestamp="2026-01-07 03:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:50:37.522681807 +0000 UTC m=+1084.088376572" watchObservedRunningTime="2026-01-07 03:50:37.528153738 +0000 UTC m=+1084.093848473" Jan 07 03:50:37 crc kubenswrapper[4980]: I0107 03:50:37.746371 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1812a3e2-a8c1-476d-b0f0-44dac1e07899" path="/var/lib/kubelet/pods/1812a3e2-a8c1-476d-b0f0-44dac1e07899/volumes" Jan 07 03:50:38 crc kubenswrapper[4980]: I0107 03:50:38.492844 4980 generic.go:334] "Generic (PLEG): container finished" podID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerID="bfaea25b96a6f49f2d33bcb14a187fd532fff228084c7fb52d9a93bf0b45bf91" exitCode=0 Jan 07 03:50:38 crc kubenswrapper[4980]: I0107 03:50:38.493162 4980 generic.go:334] "Generic (PLEG): container finished" podID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerID="fc2106ea7ab31638ff93479e921cd4bc6701cb1750eefed32270c18f355c268c" exitCode=143 Jan 07 03:50:38 crc kubenswrapper[4980]: I0107 03:50:38.492915 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed558a32-6e6b-4c7e-85cb-bb4b33739e26","Type":"ContainerDied","Data":"bfaea25b96a6f49f2d33bcb14a187fd532fff228084c7fb52d9a93bf0b45bf91"} Jan 07 03:50:38 crc kubenswrapper[4980]: I0107 03:50:38.493202 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed558a32-6e6b-4c7e-85cb-bb4b33739e26","Type":"ContainerDied","Data":"fc2106ea7ab31638ff93479e921cd4bc6701cb1750eefed32270c18f355c268c"} Jan 07 03:50:38 crc kubenswrapper[4980]: I0107 03:50:38.495529 4980 generic.go:334] "Generic (PLEG): container finished" podID="18a5ba18-b849-43f4-aa55-72853d477092" containerID="b65c35c6b92560c20825dd16ac96d8e227a01011813a6a3245a15ab1567ba93c" exitCode=0 Jan 07 03:50:38 crc 
kubenswrapper[4980]: I0107 03:50:38.495576 4980 generic.go:334] "Generic (PLEG): container finished" podID="18a5ba18-b849-43f4-aa55-72853d477092" containerID="b33d41de48a968d7f84207c3caa603878a416b959409771c1c50773b2c1bfb15" exitCode=143 Jan 07 03:50:38 crc kubenswrapper[4980]: I0107 03:50:38.495591 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18a5ba18-b849-43f4-aa55-72853d477092","Type":"ContainerDied","Data":"b65c35c6b92560c20825dd16ac96d8e227a01011813a6a3245a15ab1567ba93c"} Jan 07 03:50:38 crc kubenswrapper[4980]: I0107 03:50:38.495627 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18a5ba18-b849-43f4-aa55-72853d477092","Type":"ContainerDied","Data":"b33d41de48a968d7f84207c3caa603878a416b959409771c1c50773b2c1bfb15"} Jan 07 03:50:39 crc kubenswrapper[4980]: I0107 03:50:39.511009 4980 generic.go:334] "Generic (PLEG): container finished" podID="2331397d-82df-4a69-80ff-9702b0fc66ce" containerID="a2369d40f6bf481bb3301fab5e969e1e3bfbe5c0c42747c89f8d42fc32114e25" exitCode=0 Jan 07 03:50:39 crc kubenswrapper[4980]: I0107 03:50:39.511086 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q7p9b" event={"ID":"2331397d-82df-4a69-80ff-9702b0fc66ce","Type":"ContainerDied","Data":"a2369d40f6bf481bb3301fab5e969e1e3bfbe5c0c42747c89f8d42fc32114e25"} Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.627915 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.637231 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.841394 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mwrd\" (UniqueName: \"kubernetes.io/projected/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-kube-api-access-7mwrd\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.841649 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-public-tls-certs\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.841828 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-config-data\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.841965 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842040 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-logs\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842092 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-httpd-run\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842141 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-logs\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842165 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-internal-tls-certs\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842258 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-config-data\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842373 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842409 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-combined-ca-bundle\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842464 4980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-httpd-run\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842502 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cxrk\" (UniqueName: \"kubernetes.io/projected/18a5ba18-b849-43f4-aa55-72853d477092-kube-api-access-6cxrk\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842593 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-combined-ca-bundle\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842622 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-scripts\") pod \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\" (UID: \"ed558a32-6e6b-4c7e-85cb-bb4b33739e26\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842674 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-scripts\") pod \"18a5ba18-b849-43f4-aa55-72853d477092\" (UID: \"18a5ba18-b849-43f4-aa55-72853d477092\") " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.842459 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-logs" (OuterVolumeSpecName: "logs") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.846075 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.851333 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.851712 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-logs" (OuterVolumeSpecName: "logs") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.856118 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.858159 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a5ba18-b849-43f4-aa55-72853d477092-kube-api-access-6cxrk" (OuterVolumeSpecName: "kube-api-access-6cxrk") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). InnerVolumeSpecName "kube-api-access-6cxrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.858966 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-scripts" (OuterVolumeSpecName: "scripts") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.865936 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.866159 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-scripts" (OuterVolumeSpecName: "scripts") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.868387 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-kube-api-access-7mwrd" (OuterVolumeSpecName: "kube-api-access-7mwrd") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "kube-api-access-7mwrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.944124 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.947085 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mwrd\" (UniqueName: \"kubernetes.io/projected/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-kube-api-access-7mwrd\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.947253 4980 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.947346 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.947444 4980 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-httpd-run\") on node \"crc\" DevicePath 
\"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.947718 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.947832 4980 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.947927 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.948013 4980 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18a5ba18-b849-43f4-aa55-72853d477092-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.948108 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cxrk\" (UniqueName: \"kubernetes.io/projected/18a5ba18-b849-43f4-aa55-72853d477092-kube-api-access-6cxrk\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.948238 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.948336 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:41 crc kubenswrapper[4980]: I0107 03:50:41.985643 4980 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.007378 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.013305 4980 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.033193 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7658bcf7b7-pdhtz"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.050470 4980 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.050504 4980 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.050514 4980 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.071232 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-config-data" (OuterVolumeSpecName: "config-data") pod "18a5ba18-b849-43f4-aa55-72853d477092" (UID: "18a5ba18-b849-43f4-aa55-72853d477092"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.071337 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-547b7ddd64-7hclw"] Jan 07 03:50:42 crc kubenswrapper[4980]: E0107 03:50:42.071894 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerName="glance-log" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.071917 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerName="glance-log" Jan 07 03:50:42 crc kubenswrapper[4980]: E0107 03:50:42.071954 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerName="glance-httpd" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.071962 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerName="glance-httpd" Jan 07 03:50:42 crc kubenswrapper[4980]: E0107 03:50:42.071976 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-log" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.071985 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-log" Jan 07 03:50:42 crc kubenswrapper[4980]: E0107 03:50:42.072016 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-httpd" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.072023 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-httpd" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.072259 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-log" 
Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.072290 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerName="glance-log" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.072304 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" containerName="glance-httpd" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.072319 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a5ba18-b849-43f4-aa55-72853d477092" containerName="glance-httpd" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.074079 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.075171 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.077222 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.090763 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547b7ddd64-7hclw"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.102354 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.104180 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f6b9d9cdf-2t4tv"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.119896 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-config-data" (OuterVolumeSpecName: "config-data") pod "ed558a32-6e6b-4c7e-85cb-bb4b33739e26" (UID: "ed558a32-6e6b-4c7e-85cb-bb4b33739e26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.129033 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-565d4f6c4b-gj6mz"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.132654 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.152441 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-565d4f6c4b-gj6mz"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155028 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-config-data\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155071 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a806c806-4d43-4a04-aefa-0544f2a5175f-logs\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155104 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr6mb\" (UniqueName: \"kubernetes.io/projected/a806c806-4d43-4a04-aefa-0544f2a5175f-kube-api-access-nr6mb\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155129 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d0304bc-69af-4a65-90e0-088a428990a1-scripts\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155145 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-combined-ca-bundle\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155170 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-tls-certs\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155191 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-combined-ca-bundle\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 
03:50:42.155213 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-scripts\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155266 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-horizon-secret-key\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155282 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwtbc\" (UniqueName: \"kubernetes.io/projected/5d0304bc-69af-4a65-90e0-088a428990a1-kube-api-access-lwtbc\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155304 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-horizon-tls-certs\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155325 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d0304bc-69af-4a65-90e0-088a428990a1-config-data\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: 
I0107 03:50:42.155918 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0304bc-69af-4a65-90e0-088a428990a1-logs\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.155987 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-secret-key\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.156102 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a5ba18-b849-43f4-aa55-72853d477092-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.156129 4980 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.156148 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.156161 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed558a32-6e6b-4c7e-85cb-bb4b33739e26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257260 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5d0304bc-69af-4a65-90e0-088a428990a1-logs\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257315 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-secret-key\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257687 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-config-data\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257770 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a806c806-4d43-4a04-aefa-0544f2a5175f-logs\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257850 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr6mb\" (UniqueName: \"kubernetes.io/projected/a806c806-4d43-4a04-aefa-0544f2a5175f-kube-api-access-nr6mb\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257892 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d0304bc-69af-4a65-90e0-088a428990a1-scripts\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: 
\"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257914 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-combined-ca-bundle\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.257967 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-tls-certs\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.258002 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-combined-ca-bundle\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.258032 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-scripts\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.258155 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-horizon-secret-key\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 
07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.258186 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwtbc\" (UniqueName: \"kubernetes.io/projected/5d0304bc-69af-4a65-90e0-088a428990a1-kube-api-access-lwtbc\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.258229 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-horizon-tls-certs\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.258247 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a806c806-4d43-4a04-aefa-0544f2a5175f-logs\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.258260 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d0304bc-69af-4a65-90e0-088a428990a1-config-data\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.259300 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0304bc-69af-4a65-90e0-088a428990a1-logs\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.260061 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/5d0304bc-69af-4a65-90e0-088a428990a1-scripts\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.260092 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-config-data\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.260434 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-scripts\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.260713 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d0304bc-69af-4a65-90e0-088a428990a1-config-data\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.263088 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-horizon-secret-key\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.263872 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-secret-key\") pod \"horizon-547b7ddd64-7hclw\" (UID: 
\"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.263892 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-horizon-tls-certs\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.264067 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-combined-ca-bundle\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.265503 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-tls-certs\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.266690 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0304bc-69af-4a65-90e0-088a428990a1-combined-ca-bundle\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.276206 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwtbc\" (UniqueName: \"kubernetes.io/projected/5d0304bc-69af-4a65-90e0-088a428990a1-kube-api-access-lwtbc\") pod \"horizon-565d4f6c4b-gj6mz\" (UID: \"5d0304bc-69af-4a65-90e0-088a428990a1\") " pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc 
kubenswrapper[4980]: I0107 03:50:42.277747 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr6mb\" (UniqueName: \"kubernetes.io/projected/a806c806-4d43-4a04-aefa-0544f2a5175f-kube-api-access-nr6mb\") pod \"horizon-547b7ddd64-7hclw\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.410034 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.457754 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.563792 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed558a32-6e6b-4c7e-85cb-bb4b33739e26","Type":"ContainerDied","Data":"d3434693f460df2d06a7c61e4f8eb6f5d534c55c00e7800bcde8650d95a21dc0"} Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.563930 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.564219 4980 scope.go:117] "RemoveContainer" containerID="bfaea25b96a6f49f2d33bcb14a187fd532fff228084c7fb52d9a93bf0b45bf91" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.572655 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18a5ba18-b849-43f4-aa55-72853d477092","Type":"ContainerDied","Data":"90e4d043c38bec4bb68d0248aea81852ade668a854976723360b7b59c5123c87"} Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.572670 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.630028 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.654685 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.690733 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.702063 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.717454 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.718946 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.722460 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.722709 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d7gzb" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.723517 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.723712 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.728689 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.734459 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.737603 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.747124 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.747670 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.747991 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886774 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-logs\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886858 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886882 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886905 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886924 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886944 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886962 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886980 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.886995 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.887012 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-scripts\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.887042 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.887063 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-config-data\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.887083 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.887120 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.887178 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff27g\" (UniqueName: \"kubernetes.io/projected/35351b71-653b-4428-8ced-16202fce5e62-kube-api-access-ff27g\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.887199 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkh7z\" (UniqueName: \"kubernetes.io/projected/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-kube-api-access-pkh7z\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.988863 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.988924 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.988953 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.988972 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.988991 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989010 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989029 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989043 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989059 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-scripts\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989090 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989110 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-config-data\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989132 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989169 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989215 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff27g\" (UniqueName: \"kubernetes.io/projected/35351b71-653b-4428-8ced-16202fce5e62-kube-api-access-ff27g\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989233 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkh7z\" (UniqueName: \"kubernetes.io/projected/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-kube-api-access-pkh7z\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989261 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-logs\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.989729 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-logs\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.990348 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") 
" pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.990417 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.990666 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.991069 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.991074 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.995050 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:42 crc 
kubenswrapper[4980]: I0107 03:50:42.997441 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:42 crc kubenswrapper[4980]: I0107 03:50:42.998808 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.002862 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.005453 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.013083 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-scripts\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.022381 4980 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ff27g\" (UniqueName: \"kubernetes.io/projected/35351b71-653b-4428-8ced-16202fce5e62-kube-api-access-ff27g\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.023771 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.026740 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-config-data\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.039028 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") " pod="openstack/glance-default-external-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.041007 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.048264 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkh7z\" (UniqueName: 
\"kubernetes.io/projected/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-kube-api-access-pkh7z\") pod \"glance-default-internal-api-0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.054222 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.068096 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.751099 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a5ba18-b849-43f4-aa55-72853d477092" path="/var/lib/kubelet/pods/18a5ba18-b849-43f4-aa55-72853d477092/volumes" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.752413 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed558a32-6e6b-4c7e-85cb-bb4b33739e26" path="/var/lib/kubelet/pods/ed558a32-6e6b-4c7e-85cb-bb4b33739e26/volumes" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.821927 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.945751 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-prhnz"] Jan 07 03:50:43 crc kubenswrapper[4980]: I0107 03:50:43.945997 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-prhnz" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="dnsmasq-dns" containerID="cri-o://2d8de6c6c26fcfb3dc380bfb0007f10d98f628ee967485923ae03c44e7c6f966" gracePeriod=10 Jan 07 03:50:44 crc kubenswrapper[4980]: I0107 03:50:44.625068 4980 generic.go:334] "Generic (PLEG): container finished" podID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" 
containerID="2d8de6c6c26fcfb3dc380bfb0007f10d98f628ee967485923ae03c44e7c6f966" exitCode=0 Jan 07 03:50:44 crc kubenswrapper[4980]: I0107 03:50:44.625135 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-prhnz" event={"ID":"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9","Type":"ContainerDied","Data":"2d8de6c6c26fcfb3dc380bfb0007f10d98f628ee967485923ae03c44e7c6f966"} Jan 07 03:50:53 crc kubenswrapper[4980]: I0107 03:50:53.911637 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-prhnz" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.140417 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.324427 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-fernet-keys\") pod \"2331397d-82df-4a69-80ff-9702b0fc66ce\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.324911 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-combined-ca-bundle\") pod \"2331397d-82df-4a69-80ff-9702b0fc66ce\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.325015 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-config-data\") pod \"2331397d-82df-4a69-80ff-9702b0fc66ce\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 
03:50:58.325069 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-credential-keys\") pod \"2331397d-82df-4a69-80ff-9702b0fc66ce\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.325237 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-scripts\") pod \"2331397d-82df-4a69-80ff-9702b0fc66ce\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.325285 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srfw9\" (UniqueName: \"kubernetes.io/projected/2331397d-82df-4a69-80ff-9702b0fc66ce-kube-api-access-srfw9\") pod \"2331397d-82df-4a69-80ff-9702b0fc66ce\" (UID: \"2331397d-82df-4a69-80ff-9702b0fc66ce\") " Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.334210 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2331397d-82df-4a69-80ff-9702b0fc66ce" (UID: "2331397d-82df-4a69-80ff-9702b0fc66ce"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.334363 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-scripts" (OuterVolumeSpecName: "scripts") pod "2331397d-82df-4a69-80ff-9702b0fc66ce" (UID: "2331397d-82df-4a69-80ff-9702b0fc66ce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.334824 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2331397d-82df-4a69-80ff-9702b0fc66ce-kube-api-access-srfw9" (OuterVolumeSpecName: "kube-api-access-srfw9") pod "2331397d-82df-4a69-80ff-9702b0fc66ce" (UID: "2331397d-82df-4a69-80ff-9702b0fc66ce"). InnerVolumeSpecName "kube-api-access-srfw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.334897 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2331397d-82df-4a69-80ff-9702b0fc66ce" (UID: "2331397d-82df-4a69-80ff-9702b0fc66ce"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.363717 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2331397d-82df-4a69-80ff-9702b0fc66ce" (UID: "2331397d-82df-4a69-80ff-9702b0fc66ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.369729 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-config-data" (OuterVolumeSpecName: "config-data") pod "2331397d-82df-4a69-80ff-9702b0fc66ce" (UID: "2331397d-82df-4a69-80ff-9702b0fc66ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.427022 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.427592 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srfw9\" (UniqueName: \"kubernetes.io/projected/2331397d-82df-4a69-80ff-9702b0fc66ce-kube-api-access-srfw9\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.427614 4980 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.427622 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.427631 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.427639 4980 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2331397d-82df-4a69-80ff-9702b0fc66ce-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.766620 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q7p9b" event={"ID":"2331397d-82df-4a69-80ff-9702b0fc66ce","Type":"ContainerDied","Data":"53c92743a92cbe99b0c8e44a40af1ba685c8c73c0f67092e9c76316072542514"} Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 
03:50:58.766672 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53c92743a92cbe99b0c8e44a40af1ba685c8c73c0f67092e9c76316072542514" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.766751 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q7p9b" Jan 07 03:50:58 crc kubenswrapper[4980]: I0107 03:50:58.912480 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-prhnz" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.225689 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-q7p9b"] Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.232100 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-q7p9b"] Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.334519 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vxwsx"] Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.334930 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2331397d-82df-4a69-80ff-9702b0fc66ce" containerName="keystone-bootstrap" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.334943 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="2331397d-82df-4a69-80ff-9702b0fc66ce" containerName="keystone-bootstrap" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.335113 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="2331397d-82df-4a69-80ff-9702b0fc66ce" containerName="keystone-bootstrap" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.335692 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.340539 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.340713 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.340818 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.340932 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.341121 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fsjw7" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.358492 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vxwsx"] Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.443521 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-fernet-keys\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.444029 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zbc\" (UniqueName: \"kubernetes.io/projected/d40c84e1-0c50-4439-bfbe-469ac096cbea-kube-api-access-84zbc\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.444056 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-combined-ca-bundle\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.444078 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-config-data\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.444385 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-scripts\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.444641 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-credential-keys\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.547438 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-credential-keys\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.547655 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-fernet-keys\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.547737 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84zbc\" (UniqueName: \"kubernetes.io/projected/d40c84e1-0c50-4439-bfbe-469ac096cbea-kube-api-access-84zbc\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.547770 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-combined-ca-bundle\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.547864 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-config-data\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.547961 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-scripts\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.552791 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-config-data\") pod \"keystone-bootstrap-vxwsx\" (UID: 
\"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.553214 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-combined-ca-bundle\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.554595 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-scripts\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.554921 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-credential-keys\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.566076 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-fernet-keys\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.567109 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84zbc\" (UniqueName: \"kubernetes.io/projected/d40c84e1-0c50-4439-bfbe-469ac096cbea-kube-api-access-84zbc\") pod \"keystone-bootstrap-vxwsx\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 
03:50:59.667421 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.748490 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2331397d-82df-4a69-80ff-9702b0fc66ce" path="/var/lib/kubelet/pods/2331397d-82df-4a69-80ff-9702b0fc66ce/volumes" Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.847233 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.847445 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n667h54chc6h5b9hbbh75h58h5dh97h5fh5f5h5d4hc7h5bh58fhc7hffh58fh5bch7fh668h68hchdhb7hc4hb9h57chc9h556h96h54q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7k4wq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPat
hExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f6b9d9cdf-2t4tv_openstack(37c5e78b-d013-4936-a6a0-639aff10ff45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.856388 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.856519 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwjmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-bng92_openstack(bc47b5ba-b1a8-4615-be80-db3ae1580399): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.858197 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-bng92" podUID="bc47b5ba-b1a8-4615-be80-db3ae1580399" Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.885752 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 07 03:50:59 crc kubenswrapper[4980]: E0107 03:50:59.885975 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h87hb5hbbh67bh584h68ch684hfh95h54dh568h5d5hc4hc9h75hdfh5bch586h5h65h6hdch5bbhdch7h666h588h694h5dfh5d9h675q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99klz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7658bcf7b7-pdhtz_openstack(02ac366d-3498-4985-b9d8-5d145f5c3048): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:50:59 crc kubenswrapper[4980]: I0107 03:50:59.926726 
4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.058155 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-sb\") pod \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.058229 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-config\") pod \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.058255 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-nb\") pod \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.058349 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-dns-svc\") pod \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.058484 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcrvh\" (UniqueName: \"kubernetes.io/projected/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-kube-api-access-fcrvh\") pod \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\" (UID: \"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9\") " Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.067888 4980 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-kube-api-access-fcrvh" (OuterVolumeSpecName: "kube-api-access-fcrvh") pod "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" (UID: "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9"). InnerVolumeSpecName "kube-api-access-fcrvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.099746 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" (UID: "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.105392 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" (UID: "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.108296 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-config" (OuterVolumeSpecName: "config") pod "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" (UID: "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.117335 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" (UID: "4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.160964 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.161006 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcrvh\" (UniqueName: \"kubernetes.io/projected/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-kube-api-access-fcrvh\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.161365 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.161399 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.161412 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.800660 4980 generic.go:334] "Generic (PLEG): container finished" podID="d250b696-8adc-41af-8a3c-36c7a87a721f" containerID="b247233d37522498e864a29b26636f8e7437eae0bd7181c80835032b566449b3" exitCode=0 Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.800772 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6522" event={"ID":"d250b696-8adc-41af-8a3c-36c7a87a721f","Type":"ContainerDied","Data":"b247233d37522498e864a29b26636f8e7437eae0bd7181c80835032b566449b3"} Jan 07 03:51:00 crc 
kubenswrapper[4980]: I0107 03:51:00.803616 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-prhnz" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.803648 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-prhnz" event={"ID":"4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9","Type":"ContainerDied","Data":"af736be5ed371a634cfd3faffde7dee1ae6c1e274f2e904a38c1f971c66fca29"} Jan 07 03:51:00 crc kubenswrapper[4980]: E0107 03:51:00.839400 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-bng92" podUID="bc47b5ba-b1a8-4615-be80-db3ae1580399" Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.873691 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-prhnz"] Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.885068 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-prhnz"] Jan 07 03:51:00 crc kubenswrapper[4980]: I0107 03:51:00.913514 4980 scope.go:117] "RemoveContainer" containerID="fc2106ea7ab31638ff93479e921cd4bc6701cb1750eefed32270c18f355c268c" Jan 07 03:51:00 crc kubenswrapper[4980]: E0107 03:51:00.918445 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 07 03:51:00 crc kubenswrapper[4980]: E0107 03:51:00.918731 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mhc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b9flf_openstack(2df19fa0-d6ce-4539-8f87-9d6935314e82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:51:00 crc kubenswrapper[4980]: E0107 03:51:00.920315 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-b9flf" podUID="2df19fa0-d6ce-4539-8f87-9d6935314e82" Jan 07 03:51:01 crc kubenswrapper[4980]: E0107 03:51:01.342896 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 07 03:51:01 crc kubenswrapper[4980]: E0107 03:51:01.343113 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67bh65ch7bh59bh68bh58bhc7h666h5bbh6dh9ch5dch87h65bh54fh656h666h57dh65fh5dh6ch597hcch57bhc5h5d6h564h56bh6bh5fh7ch5b6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8w2ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(223a79c7-9460-4b16-a65d-90e2c0751dfa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.385735 4980 scope.go:117] "RemoveContainer" containerID="b65c35c6b92560c20825dd16ac96d8e227a01011813a6a3245a15ab1567ba93c" Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.597539 4980 scope.go:117] "RemoveContainer" containerID="b33d41de48a968d7f84207c3caa603878a416b959409771c1c50773b2c1bfb15" Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.680807 4980 scope.go:117] "RemoveContainer" containerID="2d8de6c6c26fcfb3dc380bfb0007f10d98f628ee967485923ae03c44e7c6f966" Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.707877 4980 scope.go:117] "RemoveContainer" containerID="a82e2ee8c12dfd7a9076ec0fcf1d8b1b8d9b83145be79fbee6cf8efbd1368a37" Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.762183 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" path="/var/lib/kubelet/pods/4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9/volumes" Jan 07 03:51:01 crc kubenswrapper[4980]: 
E0107 03:51:01.888993 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-7658bcf7b7-pdhtz" podUID="02ac366d-3498-4985-b9d8-5d145f5c3048" Jan 07 03:51:01 crc kubenswrapper[4980]: E0107 03:51:01.889160 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-f6b9d9cdf-2t4tv" podUID="37c5e78b-d013-4936-a6a0-639aff10ff45" Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.912920 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6fd86d65-8ldnw" event={"ID":"9d74f1e6-2229-4ff7-8c80-b12d09285da4","Type":"ContainerStarted","Data":"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5"} Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.922373 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f6b9d9cdf-2t4tv" podUID="37c5e78b-d013-4936-a6a0-639aff10ff45" containerName="horizon" containerID="cri-o://cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45" gracePeriod=30 Jan 07 03:51:01 crc kubenswrapper[4980]: I0107 03:51:01.985200 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7658bcf7b7-pdhtz" podUID="02ac366d-3498-4985-b9d8-5d145f5c3048" containerName="horizon" containerID="cri-o://e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9" gracePeriod=30 Jan 07 03:51:02 crc kubenswrapper[4980]: E0107 03:51:02.001034 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" 
pod="openstack/cinder-db-sync-b9flf" podUID="2df19fa0-d6ce-4539-8f87-9d6935314e82" Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.036816 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-565d4f6c4b-gj6mz"] Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.154764 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547b7ddd64-7hclw"] Jan 07 03:51:02 crc kubenswrapper[4980]: W0107 03:51:02.160697 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda806c806_4d43_4a04_aefa_0544f2a5175f.slice/crio-32236fa005cbe821581b4f8f9bcf30d0c456a66b7437388b772ee50c3404ac33 WatchSource:0}: Error finding container 32236fa005cbe821581b4f8f9bcf30d0c456a66b7437388b772ee50c3404ac33: Status 404 returned error can't find the container with id 32236fa005cbe821581b4f8f9bcf30d0c456a66b7437388b772ee50c3404ac33 Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.250744 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.353819 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vxwsx"] Jan 07 03:51:02 crc kubenswrapper[4980]: W0107 03:51:02.374205 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40c84e1_0c50_4439_bfbe_469ac096cbea.slice/crio-9b8d8c6740883b4bb9116e8da654e7ab56992af2b288801a825ad60b5a56b059 WatchSource:0}: Error finding container 9b8d8c6740883b4bb9116e8da654e7ab56992af2b288801a825ad60b5a56b059: Status 404 returned error can't find the container with id 9b8d8c6740883b4bb9116e8da654e7ab56992af2b288801a825ad60b5a56b059 Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.386218 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:51:02 crc 
kubenswrapper[4980]: I0107 03:51:02.397018 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6522" Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.513677 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-config\") pod \"d250b696-8adc-41af-8a3c-36c7a87a721f\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.513721 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbpjb\" (UniqueName: \"kubernetes.io/projected/d250b696-8adc-41af-8a3c-36c7a87a721f-kube-api-access-xbpjb\") pod \"d250b696-8adc-41af-8a3c-36c7a87a721f\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.513879 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-combined-ca-bundle\") pod \"d250b696-8adc-41af-8a3c-36c7a87a721f\" (UID: \"d250b696-8adc-41af-8a3c-36c7a87a721f\") " Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.518208 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d250b696-8adc-41af-8a3c-36c7a87a721f-kube-api-access-xbpjb" (OuterVolumeSpecName: "kube-api-access-xbpjb") pod "d250b696-8adc-41af-8a3c-36c7a87a721f" (UID: "d250b696-8adc-41af-8a3c-36c7a87a721f"). InnerVolumeSpecName "kube-api-access-xbpjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.519172 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbpjb\" (UniqueName: \"kubernetes.io/projected/d250b696-8adc-41af-8a3c-36c7a87a721f-kube-api-access-xbpjb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.570047 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d250b696-8adc-41af-8a3c-36c7a87a721f" (UID: "d250b696-8adc-41af-8a3c-36c7a87a721f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.597349 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-config" (OuterVolumeSpecName: "config") pod "d250b696-8adc-41af-8a3c-36c7a87a721f" (UID: "d250b696-8adc-41af-8a3c-36c7a87a721f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.620683 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:02 crc kubenswrapper[4980]: I0107 03:51:02.620704 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d250b696-8adc-41af-8a3c-36c7a87a721f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.008812 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6b9d9cdf-2t4tv" event={"ID":"37c5e78b-d013-4936-a6a0-639aff10ff45","Type":"ContainerStarted","Data":"cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.012480 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9f6jz" event={"ID":"ea978d21-b28c-4714-8f07-b70f84f0efa8","Type":"ContainerStarted","Data":"d28d5b27a74017cec149329a4e1880bc4320a466663b3ad333cf8d7fdfc3ddf4"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.016544 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7658bcf7b7-pdhtz" event={"ID":"02ac366d-3498-4985-b9d8-5d145f5c3048","Type":"ContainerStarted","Data":"e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.020450 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxwsx" event={"ID":"d40c84e1-0c50-4439-bfbe-469ac096cbea","Type":"ContainerStarted","Data":"015140773c7c52137c3f3ea9229344b379a27a4e74591ccaf71c5b0834e11658"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.020476 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxwsx" 
event={"ID":"d40c84e1-0c50-4439-bfbe-469ac096cbea","Type":"ContainerStarted","Data":"9b8d8c6740883b4bb9116e8da654e7ab56992af2b288801a825ad60b5a56b059"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.024055 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6fd86d65-8ldnw" event={"ID":"9d74f1e6-2229-4ff7-8c80-b12d09285da4","Type":"ContainerStarted","Data":"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.024153 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6fd86d65-8ldnw" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon-log" containerID="cri-o://d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5" gracePeriod=30 Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.024357 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6fd86d65-8ldnw" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon" containerID="cri-o://409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592" gracePeriod=30 Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.032771 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547b7ddd64-7hclw" event={"ID":"a806c806-4d43-4a04-aefa-0544f2a5175f","Type":"ContainerStarted","Data":"cd80c28616f3a7a48c0607d28c1a8354eb753e72520c1bfc5ee438419f2ad8e7"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.032813 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547b7ddd64-7hclw" event={"ID":"a806c806-4d43-4a04-aefa-0544f2a5175f","Type":"ContainerStarted","Data":"13a8448e218af0221ddaf7496f2b718343f57d58d232fbf8f3b7827312911de5"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.032824 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547b7ddd64-7hclw" 
event={"ID":"a806c806-4d43-4a04-aefa-0544f2a5175f","Type":"ContainerStarted","Data":"32236fa005cbe821581b4f8f9bcf30d0c456a66b7437388b772ee50c3404ac33"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.037515 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-565d4f6c4b-gj6mz" event={"ID":"5d0304bc-69af-4a65-90e0-088a428990a1","Type":"ContainerStarted","Data":"6c1be0caa9308194f355b73d29b9e7ca9aef43f045c1a3b3df798816c7c32a96"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.037566 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-565d4f6c4b-gj6mz" event={"ID":"5d0304bc-69af-4a65-90e0-088a428990a1","Type":"ContainerStarted","Data":"e40e1e255cd02ca3cc68d959f60d386deb1df546536d740dee462affd483b014"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.037577 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-565d4f6c4b-gj6mz" event={"ID":"5d0304bc-69af-4a65-90e0-088a428990a1","Type":"ContainerStarted","Data":"771ed4a42fa9ad8bf5c86ccea65d875639cf9b8ca3f588bbc8303d0470f7cf8d"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.040586 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0","Type":"ContainerStarted","Data":"1a6eecc54b5d93c73d5e1b0c6d570821c6822de055b9416e43bacfa4b0e81425"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.041555 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6522" event={"ID":"d250b696-8adc-41af-8a3c-36c7a87a721f","Type":"ContainerDied","Data":"a2ec6070b20d8becdd899df363aec8d7d8c9d14ddefdfa269c53c5d01b6a7a75"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.041589 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ec6070b20d8becdd899df363aec8d7d8c9d14ddefdfa269c53c5d01b6a7a75" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.041631 4980 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6522" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.044108 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35351b71-653b-4428-8ced-16202fce5e62","Type":"ContainerStarted","Data":"5c7e991e1113a15b46f9dde4f43897affd5dc72297b9b29557c468b7b30b2d86"} Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.054107 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9f6jz" podStartSLOduration=4.414427164 podStartE2EDuration="31.054088758s" podCreationTimestamp="2026-01-07 03:50:32 +0000 UTC" firstStartedPulling="2026-01-07 03:50:34.692363255 +0000 UTC m=+1081.258057990" lastFinishedPulling="2026-01-07 03:51:01.332024849 +0000 UTC m=+1107.897719584" observedRunningTime="2026-01-07 03:51:03.033343921 +0000 UTC m=+1109.599038676" watchObservedRunningTime="2026-01-07 03:51:03.054088758 +0000 UTC m=+1109.619783493" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.062140 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vxwsx" podStartSLOduration=4.062118509 podStartE2EDuration="4.062118509s" podCreationTimestamp="2026-01-07 03:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:03.055356188 +0000 UTC m=+1109.621050913" watchObservedRunningTime="2026-01-07 03:51:03.062118509 +0000 UTC m=+1109.627813244" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.124426 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-547b7ddd64-7hclw" podStartSLOduration=22.124405562 podStartE2EDuration="22.124405562s" podCreationTimestamp="2026-01-07 03:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:03.090586827 +0000 UTC m=+1109.656281562" watchObservedRunningTime="2026-01-07 03:51:03.124405562 +0000 UTC m=+1109.690100297" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.135600 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f6fd86d65-8ldnw" podStartSLOduration=4.322229689 podStartE2EDuration="31.135583381s" podCreationTimestamp="2026-01-07 03:50:32 +0000 UTC" firstStartedPulling="2026-01-07 03:50:34.513710522 +0000 UTC m=+1081.079405247" lastFinishedPulling="2026-01-07 03:51:01.327064204 +0000 UTC m=+1107.892758939" observedRunningTime="2026-01-07 03:51:03.115798464 +0000 UTC m=+1109.681493209" watchObservedRunningTime="2026-01-07 03:51:03.135583381 +0000 UTC m=+1109.701278116" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.151780 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-565d4f6c4b-gj6mz" podStartSLOduration=21.151765787 podStartE2EDuration="21.151765787s" podCreationTimestamp="2026-01-07 03:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:03.151264031 +0000 UTC m=+1109.716958766" watchObservedRunningTime="2026-01-07 03:51:03.151765787 +0000 UTC m=+1109.717460522" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.228168 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.250772 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tbmx2"] Jan 07 03:51:03 crc kubenswrapper[4980]: E0107 03:51:03.252224 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="dnsmasq-dns" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 
03:51:03.252242 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="dnsmasq-dns" Jan 07 03:51:03 crc kubenswrapper[4980]: E0107 03:51:03.252254 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d250b696-8adc-41af-8a3c-36c7a87a721f" containerName="neutron-db-sync" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.252260 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d250b696-8adc-41af-8a3c-36c7a87a721f" containerName="neutron-db-sync" Jan 07 03:51:03 crc kubenswrapper[4980]: E0107 03:51:03.252403 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="init" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.252412 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="init" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.253002 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="d250b696-8adc-41af-8a3c-36c7a87a721f" containerName="neutron-db-sync" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.253021 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="dnsmasq-dns" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.254916 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.257420 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7fd46fb64b-j8mm8"] Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.258784 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.260880 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.261168 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.261421 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.262146 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bd7cc" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.266625 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fd46fb64b-j8mm8"] Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.270824 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tbmx2"] Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.342443 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-ovndb-tls-certs\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343017 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343100 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343198 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rpcr\" (UniqueName: \"kubernetes.io/projected/60adb166-cc77-4a92-833b-59621ae07155-kube-api-access-8rpcr\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343302 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343379 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6c47\" (UniqueName: \"kubernetes.io/projected/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-kube-api-access-c6c47\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343489 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-config\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 
03:51:03.343648 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-combined-ca-bundle\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343728 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-config\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343823 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-svc\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.343920 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-httpd-config\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446240 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-svc\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446337 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-httpd-config\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446375 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-ovndb-tls-certs\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446392 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446424 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446464 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rpcr\" (UniqueName: \"kubernetes.io/projected/60adb166-cc77-4a92-833b-59621ae07155-kube-api-access-8rpcr\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446516 4980 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446538 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6c47\" (UniqueName: \"kubernetes.io/projected/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-kube-api-access-c6c47\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446606 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-config\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446628 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-combined-ca-bundle\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.446663 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-config\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.447709 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-config\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.447771 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-svc\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.448302 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.452172 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.452344 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.465807 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-ovndb-tls-certs\") pod 
\"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.467754 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-httpd-config\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.475335 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-combined-ca-bundle\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.476354 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rpcr\" (UniqueName: \"kubernetes.io/projected/60adb166-cc77-4a92-833b-59621ae07155-kube-api-access-8rpcr\") pod \"dnsmasq-dns-55f844cf75-tbmx2\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.477643 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6c47\" (UniqueName: \"kubernetes.io/projected/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-kube-api-access-c6c47\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.478033 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-config\") pod \"neutron-7fd46fb64b-j8mm8\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " pod="openstack/neutron-7fd46fb64b-j8mm8" 
Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.491198 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6fd86d65-8ldnw" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.597597 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.621395 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:03 crc kubenswrapper[4980]: I0107 03:51:03.912753 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-prhnz" podUID="4e3ab5a9-26d4-451c-ab15-8f8aae9e17d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 07 03:51:04 crc kubenswrapper[4980]: I0107 03:51:04.077736 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0","Type":"ContainerStarted","Data":"17de836762208ae15920ea22b9836e1567d254d8d296e43e52ac798bb595a9a1"} Jan 07 03:51:04 crc kubenswrapper[4980]: I0107 03:51:04.081449 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35351b71-653b-4428-8ced-16202fce5e62","Type":"ContainerStarted","Data":"cca09f3c7a1803a2e3add2b5b583ee11a0a446886476b6e3b81db5147efc7fc0"} Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:04.434431 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fd46fb64b-j8mm8"] Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:04.452847 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tbmx2"] Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.093973 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd46fb64b-j8mm8" 
event={"ID":"c323500f-74e7-47cd-b4fd-ab15b5fedfb5","Type":"ContainerStarted","Data":"ab5fcbb72fb4bcd5c79c1c11194a2d5ce3c022d9cb9b6b93f3ce22715fa6a6eb"} Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.094315 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd46fb64b-j8mm8" event={"ID":"c323500f-74e7-47cd-b4fd-ab15b5fedfb5","Type":"ContainerStarted","Data":"78f09e661230db9ab52a61557632ed4a9e62ccca5a7089036a8d86482dad935c"} Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.095933 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerStarted","Data":"b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c"} Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.097360 4980 generic.go:334] "Generic (PLEG): container finished" podID="60adb166-cc77-4a92-833b-59621ae07155" containerID="b3577780f33f804a49397e1954350b7a46dbad74774040ae259e3837b66eca46" exitCode=0 Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.097385 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" event={"ID":"60adb166-cc77-4a92-833b-59621ae07155","Type":"ContainerDied","Data":"b3577780f33f804a49397e1954350b7a46dbad74774040ae259e3837b66eca46"} Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.097398 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" event={"ID":"60adb166-cc77-4a92-833b-59621ae07155","Type":"ContainerStarted","Data":"2263c1ee3bbbb74519369b63a2166ce2712cef2dbe14b10b5a2de45f0143458c"} Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.443719 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5ffd847cb9-kf6tb"] Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.445336 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.450167 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.450326 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.498436 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ffd847cb9-kf6tb"] Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.510626 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-public-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.510869 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-httpd-config\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.510981 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-ovndb-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.511004 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-config\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.511048 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-combined-ca-bundle\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.511116 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwl8j\" (UniqueName: \"kubernetes.io/projected/6369f2cd-b133-42d0-bac5-f4790bf08ae5-kube-api-access-kwl8j\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.511277 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-internal-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.615856 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-public-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.616260 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-httpd-config\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.616292 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-ovndb-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.616314 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-config\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.616339 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-combined-ca-bundle\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.616361 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwl8j\" (UniqueName: \"kubernetes.io/projected/6369f2cd-b133-42d0-bac5-f4790bf08ae5-kube-api-access-kwl8j\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.616413 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-internal-tls-certs\") pod 
\"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.622501 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-internal-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.630617 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-combined-ca-bundle\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.631160 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-httpd-config\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.631680 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-ovndb-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.632794 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-public-tls-certs\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 
03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.633332 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6369f2cd-b133-42d0-bac5-f4790bf08ae5-config\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.644326 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwl8j\" (UniqueName: \"kubernetes.io/projected/6369f2cd-b133-42d0-bac5-f4790bf08ae5-kube-api-access-kwl8j\") pod \"neutron-5ffd847cb9-kf6tb\" (UID: \"6369f2cd-b133-42d0-bac5-f4790bf08ae5\") " pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:05 crc kubenswrapper[4980]: I0107 03:51:05.804710 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.122247 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd46fb64b-j8mm8" event={"ID":"c323500f-74e7-47cd-b4fd-ab15b5fedfb5","Type":"ContainerStarted","Data":"ebd13c614b28d044d8712f567fe7b9441d275aea157ae3f8244bbfd9e0ca64b5"} Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.122601 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.134622 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0","Type":"ContainerStarted","Data":"8baa92e6dd0791333bf2483745673cd8a263959940c42a2d3c407410b2c97cdb"} Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.146111 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7fd46fb64b-j8mm8" podStartSLOduration=3.146098042 podStartE2EDuration="3.146098042s" podCreationTimestamp="2026-01-07 03:51:03 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:06.145315197 +0000 UTC m=+1112.711009922" watchObservedRunningTime="2026-01-07 03:51:06.146098042 +0000 UTC m=+1112.711792897" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.148292 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" event={"ID":"60adb166-cc77-4a92-833b-59621ae07155","Type":"ContainerStarted","Data":"89d6786902dd709102cd3e7626e49ec8ad82df65d0ecd95b99bac9bd63ad8ca1"} Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.149061 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.153020 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35351b71-653b-4428-8ced-16202fce5e62","Type":"ContainerStarted","Data":"4bdcc63e830c668157f4080e416db51ae79d8abfb508c653be5b1436216dc8a5"} Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.216794 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=24.216772547 podStartE2EDuration="24.216772547s" podCreationTimestamp="2026-01-07 03:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:06.167403026 +0000 UTC m=+1112.733097761" watchObservedRunningTime="2026-01-07 03:51:06.216772547 +0000 UTC m=+1112.782467282" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.236539 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.236512453 podStartE2EDuration="24.236512453s" podCreationTimestamp="2026-01-07 03:50:42 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:06.199108565 +0000 UTC m=+1112.764803300" watchObservedRunningTime="2026-01-07 03:51:06.236512453 +0000 UTC m=+1112.802207188" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.243815 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" podStartSLOduration=3.24379953 podStartE2EDuration="3.24379953s" podCreationTimestamp="2026-01-07 03:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:06.221171644 +0000 UTC m=+1112.786866379" watchObservedRunningTime="2026-01-07 03:51:06.24379953 +0000 UTC m=+1112.809494265" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.372714 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ffd847cb9-kf6tb"] Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.542946 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.543002 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:51:06 crc kubenswrapper[4980]: I0107 03:51:06.685981 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:51:07 crc kubenswrapper[4980]: I0107 03:51:07.165111 4980 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-5ffd847cb9-kf6tb" event={"ID":"6369f2cd-b133-42d0-bac5-f4790bf08ae5","Type":"ContainerStarted","Data":"faa1a7584940779bbebb60231e936dfbaa8dd786f7f7a209a83fe3c2a3c04447"} Jan 07 03:51:08 crc kubenswrapper[4980]: I0107 03:51:08.175105 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ffd847cb9-kf6tb" event={"ID":"6369f2cd-b133-42d0-bac5-f4790bf08ae5","Type":"ContainerStarted","Data":"c050a413b4a303e2eb8481b0e44e2e03d6c7c1b0b8fdc68795d6ee091b7ae9c0"} Jan 07 03:51:10 crc kubenswrapper[4980]: I0107 03:51:10.194474 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ffd847cb9-kf6tb" event={"ID":"6369f2cd-b133-42d0-bac5-f4790bf08ae5","Type":"ContainerStarted","Data":"57d9bf472024b7531dcd40d7984f66ea31cc60986ed9065d65a03fe81f3e9361"} Jan 07 03:51:10 crc kubenswrapper[4980]: I0107 03:51:10.194887 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:10 crc kubenswrapper[4980]: I0107 03:51:10.222574 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5ffd847cb9-kf6tb" podStartSLOduration=5.22253683 podStartE2EDuration="5.22253683s" podCreationTimestamp="2026-01-07 03:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:10.216697857 +0000 UTC m=+1116.782392592" watchObservedRunningTime="2026-01-07 03:51:10.22253683 +0000 UTC m=+1116.788231565" Jan 07 03:51:11 crc kubenswrapper[4980]: I0107 03:51:11.203532 4980 generic.go:334] "Generic (PLEG): container finished" podID="d40c84e1-0c50-4439-bfbe-469ac096cbea" containerID="015140773c7c52137c3f3ea9229344b379a27a4e74591ccaf71c5b0834e11658" exitCode=0 Jan 07 03:51:11 crc kubenswrapper[4980]: I0107 03:51:11.203598 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxwsx" 
event={"ID":"d40c84e1-0c50-4439-bfbe-469ac096cbea","Type":"ContainerDied","Data":"015140773c7c52137c3f3ea9229344b379a27a4e74591ccaf71c5b0834e11658"} Jan 07 03:51:11 crc kubenswrapper[4980]: I0107 03:51:11.204958 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea978d21-b28c-4714-8f07-b70f84f0efa8" containerID="d28d5b27a74017cec149329a4e1880bc4320a466663b3ad333cf8d7fdfc3ddf4" exitCode=0 Jan 07 03:51:11 crc kubenswrapper[4980]: I0107 03:51:11.205851 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9f6jz" event={"ID":"ea978d21-b28c-4714-8f07-b70f84f0efa8","Type":"ContainerDied","Data":"d28d5b27a74017cec149329a4e1880bc4320a466663b3ad333cf8d7fdfc3ddf4"} Jan 07 03:51:12 crc kubenswrapper[4980]: I0107 03:51:12.410338 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:51:12 crc kubenswrapper[4980]: I0107 03:51:12.410636 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:51:12 crc kubenswrapper[4980]: I0107 03:51:12.435859 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-547b7ddd64-7hclw" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 07 03:51:12 crc kubenswrapper[4980]: I0107 03:51:12.460783 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:51:12 crc kubenswrapper[4980]: I0107 03:51:12.461693 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:51:12 crc kubenswrapper[4980]: I0107 03:51:12.464460 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-565d4f6c4b-gj6mz" 
podUID="5d0304bc-69af-4a65-90e0-088a428990a1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.055715 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.056085 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.056101 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.056114 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.069705 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.069754 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.070223 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.070274 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.093813 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.106773 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.120096 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.145032 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.601482 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.682163 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qrxm6"] Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.682406 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerName="dnsmasq-dns" containerID="cri-o://d0363be642b9f8eb8a5626a9c1a6f14b3c7d6b196431e1e63297810b4ab2620e" gracePeriod=10 Jan 07 03:51:13 crc kubenswrapper[4980]: I0107 03:51:13.819965 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.236152 4980 generic.go:334] "Generic (PLEG): container finished" podID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerID="d0363be642b9f8eb8a5626a9c1a6f14b3c7d6b196431e1e63297810b4ab2620e" exitCode=0 Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.236969 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" 
event={"ID":"9dc4048d-d6ae-4100-b135-19e957d54eb6","Type":"ContainerDied","Data":"d0363be642b9f8eb8a5626a9c1a6f14b3c7d6b196431e1e63297810b4ab2620e"} Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.451309 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9f6jz" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.516941 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.556282 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-db-sync-config-data\") pod \"ea978d21-b28c-4714-8f07-b70f84f0efa8\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.556717 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtw4z\" (UniqueName: \"kubernetes.io/projected/ea978d21-b28c-4714-8f07-b70f84f0efa8-kube-api-access-dtw4z\") pod \"ea978d21-b28c-4714-8f07-b70f84f0efa8\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.556765 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-credential-keys\") pod \"d40c84e1-0c50-4439-bfbe-469ac096cbea\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.556863 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-fernet-keys\") pod \"d40c84e1-0c50-4439-bfbe-469ac096cbea\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 
03:51:14.556960 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-combined-ca-bundle\") pod \"d40c84e1-0c50-4439-bfbe-469ac096cbea\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.557002 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-combined-ca-bundle\") pod \"ea978d21-b28c-4714-8f07-b70f84f0efa8\" (UID: \"ea978d21-b28c-4714-8f07-b70f84f0efa8\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.557023 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84zbc\" (UniqueName: \"kubernetes.io/projected/d40c84e1-0c50-4439-bfbe-469ac096cbea-kube-api-access-84zbc\") pod \"d40c84e1-0c50-4439-bfbe-469ac096cbea\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.557074 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-config-data\") pod \"d40c84e1-0c50-4439-bfbe-469ac096cbea\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.567307 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d40c84e1-0c50-4439-bfbe-469ac096cbea" (UID: "d40c84e1-0c50-4439-bfbe-469ac096cbea"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.576791 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d40c84e1-0c50-4439-bfbe-469ac096cbea" (UID: "d40c84e1-0c50-4439-bfbe-469ac096cbea"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.583918 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea978d21-b28c-4714-8f07-b70f84f0efa8-kube-api-access-dtw4z" (OuterVolumeSpecName: "kube-api-access-dtw4z") pod "ea978d21-b28c-4714-8f07-b70f84f0efa8" (UID: "ea978d21-b28c-4714-8f07-b70f84f0efa8"). InnerVolumeSpecName "kube-api-access-dtw4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.586695 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40c84e1-0c50-4439-bfbe-469ac096cbea-kube-api-access-84zbc" (OuterVolumeSpecName: "kube-api-access-84zbc") pod "d40c84e1-0c50-4439-bfbe-469ac096cbea" (UID: "d40c84e1-0c50-4439-bfbe-469ac096cbea"). InnerVolumeSpecName "kube-api-access-84zbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.600082 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ea978d21-b28c-4714-8f07-b70f84f0efa8" (UID: "ea978d21-b28c-4714-8f07-b70f84f0efa8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.654705 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-config-data" (OuterVolumeSpecName: "config-data") pod "d40c84e1-0c50-4439-bfbe-469ac096cbea" (UID: "d40c84e1-0c50-4439-bfbe-469ac096cbea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.657780 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d40c84e1-0c50-4439-bfbe-469ac096cbea" (UID: "d40c84e1-0c50-4439-bfbe-469ac096cbea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658457 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-scripts\") pod \"d40c84e1-0c50-4439-bfbe-469ac096cbea\" (UID: \"d40c84e1-0c50-4439-bfbe-469ac096cbea\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658811 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtw4z\" (UniqueName: \"kubernetes.io/projected/ea978d21-b28c-4714-8f07-b70f84f0efa8-kube-api-access-dtw4z\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658825 4980 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658834 4980 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658844 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658855 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84zbc\" (UniqueName: \"kubernetes.io/projected/d40c84e1-0c50-4439-bfbe-469ac096cbea-kube-api-access-84zbc\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658863 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.658871 4980 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.674967 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-scripts" (OuterVolumeSpecName: "scripts") pod "d40c84e1-0c50-4439-bfbe-469ac096cbea" (UID: "d40c84e1-0c50-4439-bfbe-469ac096cbea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.692712 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea978d21-b28c-4714-8f07-b70f84f0efa8" (UID: "ea978d21-b28c-4714-8f07-b70f84f0efa8"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.770473 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea978d21-b28c-4714-8f07-b70f84f0efa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.770509 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40c84e1-0c50-4439-bfbe-469ac096cbea-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.896760 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.975570 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-swift-storage-0\") pod \"9dc4048d-d6ae-4100-b135-19e957d54eb6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.975750 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-config\") pod \"9dc4048d-d6ae-4100-b135-19e957d54eb6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.975884 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-nb\") pod \"9dc4048d-d6ae-4100-b135-19e957d54eb6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.975908 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-sb\") pod \"9dc4048d-d6ae-4100-b135-19e957d54eb6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.975965 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-svc\") pod \"9dc4048d-d6ae-4100-b135-19e957d54eb6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.976044 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k28l9\" (UniqueName: \"kubernetes.io/projected/9dc4048d-d6ae-4100-b135-19e957d54eb6-kube-api-access-k28l9\") pod \"9dc4048d-d6ae-4100-b135-19e957d54eb6\" (UID: \"9dc4048d-d6ae-4100-b135-19e957d54eb6\") " Jan 07 03:51:14 crc kubenswrapper[4980]: I0107 03:51:14.989796 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc4048d-d6ae-4100-b135-19e957d54eb6-kube-api-access-k28l9" (OuterVolumeSpecName: "kube-api-access-k28l9") pod "9dc4048d-d6ae-4100-b135-19e957d54eb6" (UID: "9dc4048d-d6ae-4100-b135-19e957d54eb6"). InnerVolumeSpecName "kube-api-access-k28l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.044736 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-config" (OuterVolumeSpecName: "config") pod "9dc4048d-d6ae-4100-b135-19e957d54eb6" (UID: "9dc4048d-d6ae-4100-b135-19e957d54eb6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.050914 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9dc4048d-d6ae-4100-b135-19e957d54eb6" (UID: "9dc4048d-d6ae-4100-b135-19e957d54eb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.052978 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9dc4048d-d6ae-4100-b135-19e957d54eb6" (UID: "9dc4048d-d6ae-4100-b135-19e957d54eb6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.054767 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9dc4048d-d6ae-4100-b135-19e957d54eb6" (UID: "9dc4048d-d6ae-4100-b135-19e957d54eb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.058153 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9dc4048d-d6ae-4100-b135-19e957d54eb6" (UID: "9dc4048d-d6ae-4100-b135-19e957d54eb6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.077708 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.077760 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k28l9\" (UniqueName: \"kubernetes.io/projected/9dc4048d-d6ae-4100-b135-19e957d54eb6-kube-api-access-k28l9\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.077772 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.077781 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.077791 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.077800 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dc4048d-d6ae-4100-b135-19e957d54eb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.256248 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bng92" event={"ID":"bc47b5ba-b1a8-4615-be80-db3ae1580399","Type":"ContainerStarted","Data":"6e0d55fabb87c0cf46c5fc766767b97312dbbf0f0f4ea538e1dad253737414be"} Jan 07 03:51:15 crc 
kubenswrapper[4980]: I0107 03:51:15.258160 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9f6jz" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.258161 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9f6jz" event={"ID":"ea978d21-b28c-4714-8f07-b70f84f0efa8","Type":"ContainerDied","Data":"db977c9914a79697b1ce767bea82246f644d8ab1f8e1bfd1e2d05d96db0022b9"} Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.258962 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db977c9914a79697b1ce767bea82246f644d8ab1f8e1bfd1e2d05d96db0022b9" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.260907 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" event={"ID":"9dc4048d-d6ae-4100-b135-19e957d54eb6","Type":"ContainerDied","Data":"9e5f2199761f48e63eba80618ae06ffaa89b7e385b8232d479d73285c967e039"} Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.260962 4980 scope.go:117] "RemoveContainer" containerID="d0363be642b9f8eb8a5626a9c1a6f14b3c7d6b196431e1e63297810b4ab2620e" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.261055 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qrxm6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.282234 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerStarted","Data":"b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666"} Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.290268 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxwsx" event={"ID":"d40c84e1-0c50-4439-bfbe-469ac096cbea","Type":"ContainerDied","Data":"9b8d8c6740883b4bb9116e8da654e7ab56992af2b288801a825ad60b5a56b059"} Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.290321 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8d8c6740883b4bb9116e8da654e7ab56992af2b288801a825ad60b5a56b059" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.290430 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vxwsx" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.314207 4980 scope.go:117] "RemoveContainer" containerID="63784fa003b918f9cce095f896e230c7a044684ef1f6ce3e0238d37e15987cdf" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.381365 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bng92" podStartSLOduration=2.6640869929999997 podStartE2EDuration="42.381339238s" podCreationTimestamp="2026-01-07 03:50:33 +0000 UTC" firstStartedPulling="2026-01-07 03:50:34.779206604 +0000 UTC m=+1081.344901339" lastFinishedPulling="2026-01-07 03:51:14.496458849 +0000 UTC m=+1121.062153584" observedRunningTime="2026-01-07 03:51:15.312378517 +0000 UTC m=+1121.878073252" watchObservedRunningTime="2026-01-07 03:51:15.381339238 +0000 UTC m=+1121.947033963" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.390700 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qrxm6"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.403091 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qrxm6"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.688104 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-558f69c5-b5wnd"] Jan 07 03:51:15 crc kubenswrapper[4980]: E0107 03:51:15.688464 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerName="init" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.688477 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerName="init" Jan 07 03:51:15 crc kubenswrapper[4980]: E0107 03:51:15.688493 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40c84e1-0c50-4439-bfbe-469ac096cbea" containerName="keystone-bootstrap" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 
03:51:15.688500 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40c84e1-0c50-4439-bfbe-469ac096cbea" containerName="keystone-bootstrap" Jan 07 03:51:15 crc kubenswrapper[4980]: E0107 03:51:15.688516 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea978d21-b28c-4714-8f07-b70f84f0efa8" containerName="barbican-db-sync" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.688522 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea978d21-b28c-4714-8f07-b70f84f0efa8" containerName="barbican-db-sync" Jan 07 03:51:15 crc kubenswrapper[4980]: E0107 03:51:15.688533 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerName="dnsmasq-dns" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.688540 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerName="dnsmasq-dns" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.688724 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40c84e1-0c50-4439-bfbe-469ac096cbea" containerName="keystone-bootstrap" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.688769 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea978d21-b28c-4714-8f07-b70f84f0efa8" containerName="barbican-db-sync" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.688787 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" containerName="dnsmasq-dns" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.697349 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.714708 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.715534 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.715753 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mlwsp" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.722804 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-78f9866cd4-xmbrp"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.724353 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.727858 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.739786 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-558f69c5-b5wnd"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.790882 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc4048d-d6ae-4100-b135-19e957d54eb6" path="/var/lib/kubelet/pods/9dc4048d-d6ae-4100-b135-19e957d54eb6/volumes" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.793576 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.793602 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-85979fc5c6-rh7l6"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.795001 4980 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.795112 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.810048 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.810262 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.810376 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.810473 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.810609 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.811223 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fsjw7" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.817877 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-config-data-custom\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.817922 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-config-data\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: 
\"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.817995 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-combined-ca-bundle\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.818019 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkm7\" (UniqueName: \"kubernetes.io/projected/2d2661ce-3148-48ac-a1b2-af154d207c5a-kube-api-access-fmkm7\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.818041 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d2661ce-3148-48ac-a1b2-af154d207c5a-logs\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.829925 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78f9866cd4-xmbrp"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.855937 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85979fc5c6-rh7l6"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.909615 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9vkwn"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.911374 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.914151 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9vkwn"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921192 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-public-tls-certs\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921234 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/eb08ed0c-20e9-44f7-9472-9d1899a51d32-kube-api-access-4k87c\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921277 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-config-data\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921352 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-internal-tls-certs\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921397 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-config-data-custom\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921431 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-config-data\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921504 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-scripts\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921582 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-config-data\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921604 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb08ed0c-20e9-44f7-9472-9d1899a51d32-logs\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921632 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-combined-ca-bundle\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921673 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-credential-keys\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921695 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-fernet-keys\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921747 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxq2p\" (UniqueName: \"kubernetes.io/projected/ede07643-4b02-490a-a73d-e6c783a138e6-kube-api-access-wxq2p\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921768 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-combined-ca-bundle\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 
03:51:15.921805 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmkm7\" (UniqueName: \"kubernetes.io/projected/2d2661ce-3148-48ac-a1b2-af154d207c5a-kube-api-access-fmkm7\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921854 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-config-data-custom\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921872 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d2661ce-3148-48ac-a1b2-af154d207c5a-logs\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.921889 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-combined-ca-bundle\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.933444 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-config-data-custom\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " 
pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.934277 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d2661ce-3148-48ac-a1b2-af154d207c5a-logs\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.939328 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-config-data\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.940228 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2661ce-3148-48ac-a1b2-af154d207c5a-combined-ca-bundle\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.954895 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmkm7\" (UniqueName: \"kubernetes.io/projected/2d2661ce-3148-48ac-a1b2-af154d207c5a-kube-api-access-fmkm7\") pod \"barbican-worker-558f69c5-b5wnd\" (UID: \"2d2661ce-3148-48ac-a1b2-af154d207c5a\") " pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.986621 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76477f569b-rknll"] Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.988434 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:15 crc kubenswrapper[4980]: I0107 03:51:15.994921 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.021114 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76477f569b-rknll"] Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024118 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db5j7\" (UniqueName: \"kubernetes.io/projected/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-kube-api-access-db5j7\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024173 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-scripts\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024197 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024217 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-config\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc 
kubenswrapper[4980]: I0107 03:51:16.024242 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-config-data\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024258 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024278 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024298 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb08ed0c-20e9-44f7-9472-9d1899a51d32-logs\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024315 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-combined-ca-bundle\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " 
pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024341 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-credential-keys\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024365 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-fernet-keys\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024381 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024405 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxq2p\" (UniqueName: \"kubernetes.io/projected/ede07643-4b02-490a-a73d-e6c783a138e6-kube-api-access-wxq2p\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024438 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-config-data-custom\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " 
pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024455 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-combined-ca-bundle\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024474 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-public-tls-certs\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024495 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/eb08ed0c-20e9-44f7-9472-9d1899a51d32-kube-api-access-4k87c\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024523 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-config-data\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.024587 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-internal-tls-certs\") pod \"keystone-85979fc5c6-rh7l6\" (UID: 
\"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.031786 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb08ed0c-20e9-44f7-9472-9d1899a51d32-logs\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.034161 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-credential-keys\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.038158 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-internal-tls-certs\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.038805 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-public-tls-certs\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.041005 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-combined-ca-bundle\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " 
pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.041271 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-combined-ca-bundle\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.042103 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-config-data\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.059928 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-fernet-keys\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.060190 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-config-data\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.060227 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede07643-4b02-490a-a73d-e6c783a138e6-scripts\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.060761 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb08ed0c-20e9-44f7-9472-9d1899a51d32-config-data-custom\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.070100 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/eb08ed0c-20e9-44f7-9472-9d1899a51d32-kube-api-access-4k87c\") pod \"barbican-keystone-listener-78f9866cd4-xmbrp\" (UID: \"eb08ed0c-20e9-44f7-9472-9d1899a51d32\") " pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.070642 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxq2p\" (UniqueName: \"kubernetes.io/projected/ede07643-4b02-490a-a73d-e6c783a138e6-kube-api-access-wxq2p\") pod \"keystone-85979fc5c6-rh7l6\" (UID: \"ede07643-4b02-490a-a73d-e6c783a138e6\") " pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.095873 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-558f69c5-b5wnd" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126653 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db5j7\" (UniqueName: \"kubernetes.io/projected/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-kube-api-access-db5j7\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126696 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126712 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-config\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126738 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126766 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " 
pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126803 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-combined-ca-bundle\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126838 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126893 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data-custom\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126924 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126954 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt9qx\" (UniqueName: \"kubernetes.io/projected/1311b795-d460-4939-8ba0-73e023eb9940-kube-api-access-dt9qx\") pod \"barbican-api-76477f569b-rknll\" (UID: 
\"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.126999 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1311b795-d460-4939-8ba0-73e023eb9940-logs\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.127721 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-config\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.128099 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.128647 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.129732 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 
crc kubenswrapper[4980]: I0107 03:51:16.133912 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.137540 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.155437 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db5j7\" (UniqueName: \"kubernetes.io/projected/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-kube-api-access-db5j7\") pod \"dnsmasq-dns-85ff748b95-9vkwn\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.174034 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.175126 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.175233 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.229743 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.229814 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt9qx\" (UniqueName: \"kubernetes.io/projected/1311b795-d460-4939-8ba0-73e023eb9940-kube-api-access-dt9qx\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.229884 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1311b795-d460-4939-8ba0-73e023eb9940-logs\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.229995 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-combined-ca-bundle\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 
03:51:16.230103 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data-custom\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.233464 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1311b795-d460-4939-8ba0-73e023eb9940-logs\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.236522 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data-custom\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.237675 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.242348 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-combined-ca-bundle\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.289542 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dt9qx\" (UniqueName: \"kubernetes.io/projected/1311b795-d460-4939-8ba0-73e023eb9940-kube-api-access-dt9qx\") pod \"barbican-api-76477f569b-rknll\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.395969 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.404953 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.449676 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.807025 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-558f69c5-b5wnd"] Jan 07 03:51:16 crc kubenswrapper[4980]: I0107 03:51:16.823805 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85979fc5c6-rh7l6"] Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.035781 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78f9866cd4-xmbrp"] Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.175772 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76477f569b-rknll"] Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.273300 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9vkwn"] Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.329442 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" event={"ID":"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b","Type":"ContainerStarted","Data":"1136700aa969fc31fdfeaee5518c50abb54e6b49f17a5c2526a5c9bf0aa6fb02"} Jan 07 03:51:17 crc 
kubenswrapper[4980]: I0107 03:51:17.349065 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" event={"ID":"eb08ed0c-20e9-44f7-9472-9d1899a51d32","Type":"ContainerStarted","Data":"7b5f6e048ef426274bc5bafe49a00b5b62dd9d9c35e505fdc420e56be1a34673"} Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.352673 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-558f69c5-b5wnd" event={"ID":"2d2661ce-3148-48ac-a1b2-af154d207c5a","Type":"ContainerStarted","Data":"5993b39269b9827339a76d25b0cfeabac1a9aea7ad74a06dc6857e607d575b94"} Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.354979 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76477f569b-rknll" event={"ID":"1311b795-d460-4939-8ba0-73e023eb9940","Type":"ContainerStarted","Data":"e1b67aa4cabe08f066f474729047395a246446dd5140e81462594c0760c28896"} Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.361944 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85979fc5c6-rh7l6" event={"ID":"ede07643-4b02-490a-a73d-e6c783a138e6","Type":"ContainerStarted","Data":"c6a4ee9b96ca91e92de84bc720368e14587ca4adcdae751d400825682b160f90"} Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.362387 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85979fc5c6-rh7l6" event={"ID":"ede07643-4b02-490a-a73d-e6c783a138e6","Type":"ContainerStarted","Data":"dd7fb6c3697f180401f6e67d0e0b66a55b1c23a3cbfb5776b202dbe88906a225"} Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.362519 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.371685 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9flf" 
event={"ID":"2df19fa0-d6ce-4539-8f87-9d6935314e82","Type":"ContainerStarted","Data":"ca79779095f3d5c4e3cee36263df16743fdac1a089634cdb6a5103e151bf4164"} Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.382350 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-85979fc5c6-rh7l6" podStartSLOduration=2.38233335 podStartE2EDuration="2.38233335s" podCreationTimestamp="2026-01-07 03:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:17.381204276 +0000 UTC m=+1123.946899011" watchObservedRunningTime="2026-01-07 03:51:17.38233335 +0000 UTC m=+1123.948028075" Jan 07 03:51:17 crc kubenswrapper[4980]: I0107 03:51:17.401289 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-b9flf" podStartSLOduration=4.945863736 podStartE2EDuration="45.401270392s" podCreationTimestamp="2026-01-07 03:50:32 +0000 UTC" firstStartedPulling="2026-01-07 03:50:34.778918125 +0000 UTC m=+1081.344612860" lastFinishedPulling="2026-01-07 03:51:15.234324781 +0000 UTC m=+1121.800019516" observedRunningTime="2026-01-07 03:51:17.400595901 +0000 UTC m=+1123.966290636" watchObservedRunningTime="2026-01-07 03:51:17.401270392 +0000 UTC m=+1123.966965127" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.393256 4980 generic.go:334] "Generic (PLEG): container finished" podID="bc47b5ba-b1a8-4615-be80-db3ae1580399" containerID="6e0d55fabb87c0cf46c5fc766767b97312dbbf0f0f4ea538e1dad253737414be" exitCode=0 Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.393681 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bng92" event={"ID":"bc47b5ba-b1a8-4615-be80-db3ae1580399","Type":"ContainerDied","Data":"6e0d55fabb87c0cf46c5fc766767b97312dbbf0f0f4ea538e1dad253737414be"} Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.427816 4980 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-api-76477f569b-rknll" event={"ID":"1311b795-d460-4939-8ba0-73e023eb9940","Type":"ContainerStarted","Data":"820afbd26c93cfd13e5a0ceaa61e7b597b55d586b7c8edde6a0aaca5d6833add"} Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.427866 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76477f569b-rknll" event={"ID":"1311b795-d460-4939-8ba0-73e023eb9940","Type":"ContainerStarted","Data":"0e86a2e894b7bfba2df7a16643e45d451153e319d788ef70232eafdbe5784175"} Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.429468 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.429505 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.477420 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerID="9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3" exitCode=0 Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.477603 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" event={"ID":"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b","Type":"ContainerDied","Data":"9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3"} Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.491575 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f67c95874-pm99w"] Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.493156 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.494797 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.502386 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76477f569b-rknll" podStartSLOduration=3.502362926 podStartE2EDuration="3.502362926s" podCreationTimestamp="2026-01-07 03:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:18.479000898 +0000 UTC m=+1125.044695633" watchObservedRunningTime="2026-01-07 03:51:18.502362926 +0000 UTC m=+1125.068057661" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.531100 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.547732 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f67c95874-pm99w"] Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.596710 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-public-tls-certs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.596875 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-combined-ca-bundle\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc 
kubenswrapper[4980]: I0107 03:51:18.596937 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfvd\" (UniqueName: \"kubernetes.io/projected/bc960852-4c05-4805-8251-8336bb022087-kube-api-access-7vfvd\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.596984 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc960852-4c05-4805-8251-8336bb022087-logs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.597015 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-config-data\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.597032 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-internal-tls-certs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.597064 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-config-data-custom\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " 
pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.700429 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-config-data-custom\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.700505 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-public-tls-certs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.700596 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-combined-ca-bundle\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.700636 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfvd\" (UniqueName: \"kubernetes.io/projected/bc960852-4c05-4805-8251-8336bb022087-kube-api-access-7vfvd\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.700677 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc960852-4c05-4805-8251-8336bb022087-logs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " 
pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.700707 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-config-data\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.700722 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-internal-tls-certs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.705099 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc960852-4c05-4805-8251-8336bb022087-logs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.705915 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-config-data-custom\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.708079 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-combined-ca-bundle\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc 
kubenswrapper[4980]: I0107 03:51:18.720210 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-internal-tls-certs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.720694 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-config-data\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.722018 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc960852-4c05-4805-8251-8336bb022087-public-tls-certs\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.723046 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vfvd\" (UniqueName: \"kubernetes.io/projected/bc960852-4c05-4805-8251-8336bb022087-kube-api-access-7vfvd\") pod \"barbican-api-6f67c95874-pm99w\" (UID: \"bc960852-4c05-4805-8251-8336bb022087\") " pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:18 crc kubenswrapper[4980]: I0107 03:51:18.981755 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:19 crc kubenswrapper[4980]: I0107 03:51:19.567096 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" event={"ID":"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b","Type":"ContainerStarted","Data":"f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2"} Jan 07 03:51:19 crc kubenswrapper[4980]: I0107 03:51:19.567717 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:19 crc kubenswrapper[4980]: I0107 03:51:19.588874 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f67c95874-pm99w"] Jan 07 03:51:19 crc kubenswrapper[4980]: I0107 03:51:19.590078 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" podStartSLOduration=4.590057704 podStartE2EDuration="4.590057704s" podCreationTimestamp="2026-01-07 03:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:19.588401351 +0000 UTC m=+1126.154096086" watchObservedRunningTime="2026-01-07 03:51:19.590057704 +0000 UTC m=+1126.155752429" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.124478 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bng92" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.239812 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-config-data\") pod \"bc47b5ba-b1a8-4615-be80-db3ae1580399\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.239883 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwjmv\" (UniqueName: \"kubernetes.io/projected/bc47b5ba-b1a8-4615-be80-db3ae1580399-kube-api-access-gwjmv\") pod \"bc47b5ba-b1a8-4615-be80-db3ae1580399\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.239954 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-scripts\") pod \"bc47b5ba-b1a8-4615-be80-db3ae1580399\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.240024 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-combined-ca-bundle\") pod \"bc47b5ba-b1a8-4615-be80-db3ae1580399\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.240739 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc47b5ba-b1a8-4615-be80-db3ae1580399-logs\") pod \"bc47b5ba-b1a8-4615-be80-db3ae1580399\" (UID: \"bc47b5ba-b1a8-4615-be80-db3ae1580399\") " Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.241002 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bc47b5ba-b1a8-4615-be80-db3ae1580399-logs" (OuterVolumeSpecName: "logs") pod "bc47b5ba-b1a8-4615-be80-db3ae1580399" (UID: "bc47b5ba-b1a8-4615-be80-db3ae1580399"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.241249 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc47b5ba-b1a8-4615-be80-db3ae1580399-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.246101 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-scripts" (OuterVolumeSpecName: "scripts") pod "bc47b5ba-b1a8-4615-be80-db3ae1580399" (UID: "bc47b5ba-b1a8-4615-be80-db3ae1580399"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.247935 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc47b5ba-b1a8-4615-be80-db3ae1580399-kube-api-access-gwjmv" (OuterVolumeSpecName: "kube-api-access-gwjmv") pod "bc47b5ba-b1a8-4615-be80-db3ae1580399" (UID: "bc47b5ba-b1a8-4615-be80-db3ae1580399"). InnerVolumeSpecName "kube-api-access-gwjmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.271982 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc47b5ba-b1a8-4615-be80-db3ae1580399" (UID: "bc47b5ba-b1a8-4615-be80-db3ae1580399"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.276266 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-config-data" (OuterVolumeSpecName: "config-data") pod "bc47b5ba-b1a8-4615-be80-db3ae1580399" (UID: "bc47b5ba-b1a8-4615-be80-db3ae1580399"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.343062 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.343097 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.343110 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc47b5ba-b1a8-4615-be80-db3ae1580399-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.343118 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwjmv\" (UniqueName: \"kubernetes.io/projected/bc47b5ba-b1a8-4615-be80-db3ae1580399-kube-api-access-gwjmv\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.497964 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f54697b86-x8z4v"] Jan 07 03:51:20 crc kubenswrapper[4980]: E0107 03:51:20.498608 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc47b5ba-b1a8-4615-be80-db3ae1580399" containerName="placement-db-sync" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.498624 4980 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bc47b5ba-b1a8-4615-be80-db3ae1580399" containerName="placement-db-sync" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.502840 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc47b5ba-b1a8-4615-be80-db3ae1580399" containerName="placement-db-sync" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.503914 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.508983 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.509190 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.518601 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f54697b86-x8z4v"] Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.546234 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjdj\" (UniqueName: \"kubernetes.io/projected/0d68262c-96ba-42af-8b46-f13aa424ba0d-kube-api-access-5vjdj\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.546285 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-scripts\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.546361 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-public-tls-certs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.546385 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d68262c-96ba-42af-8b46-f13aa424ba0d-logs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.546417 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-internal-tls-certs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.546435 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-combined-ca-bundle\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.546473 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-config-data\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.577171 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bng92" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.578079 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bng92" event={"ID":"bc47b5ba-b1a8-4615-be80-db3ae1580399","Type":"ContainerDied","Data":"95139a78aeceadaa609ab1e1e56221735d09977cc83050ffd3b25146056bdabd"} Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.578109 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95139a78aeceadaa609ab1e1e56221735d09977cc83050ffd3b25146056bdabd" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.582103 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67c95874-pm99w" event={"ID":"bc960852-4c05-4805-8251-8336bb022087","Type":"ContainerStarted","Data":"2f20f38174fb8867de8d19a1fc2d18feb4d17ce2d2fd00068f46f77b07b68378"} Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.648906 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-internal-tls-certs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.648991 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-combined-ca-bundle\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.650369 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-config-data\") pod \"placement-7f54697b86-x8z4v\" (UID: 
\"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.650635 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjdj\" (UniqueName: \"kubernetes.io/projected/0d68262c-96ba-42af-8b46-f13aa424ba0d-kube-api-access-5vjdj\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.652800 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-scripts\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.653184 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-public-tls-certs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.653300 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d68262c-96ba-42af-8b46-f13aa424ba0d-logs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.655367 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d68262c-96ba-42af-8b46-f13aa424ba0d-logs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc 
kubenswrapper[4980]: I0107 03:51:20.657289 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-scripts\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.658363 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-internal-tls-certs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.659134 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-config-data\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.659494 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-public-tls-certs\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.661700 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d68262c-96ba-42af-8b46-f13aa424ba0d-combined-ca-bundle\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.689937 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5vjdj\" (UniqueName: \"kubernetes.io/projected/0d68262c-96ba-42af-8b46-f13aa424ba0d-kube-api-access-5vjdj\") pod \"placement-7f54697b86-x8z4v\" (UID: \"0d68262c-96ba-42af-8b46-f13aa424ba0d\") " pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:20 crc kubenswrapper[4980]: I0107 03:51:20.842208 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.340524 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f54697b86-x8z4v"] Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.609755 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f54697b86-x8z4v" event={"ID":"0d68262c-96ba-42af-8b46-f13aa424ba0d","Type":"ContainerStarted","Data":"a13a70a8ab2a970e61e9284821c7c9c081434304d07d09a5cade126a43dbe458"} Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.615393 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" event={"ID":"eb08ed0c-20e9-44f7-9472-9d1899a51d32","Type":"ContainerStarted","Data":"ccb29eff85002fe29080c347cedf1f4705e75d60fd11429bafc6cd312c44ecef"} Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.615441 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" event={"ID":"eb08ed0c-20e9-44f7-9472-9d1899a51d32","Type":"ContainerStarted","Data":"59825ef4f1923c612504408c8f11cc08cf075fbf7fe58d4e4f1eb51e43785045"} Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.617934 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67c95874-pm99w" event={"ID":"bc960852-4c05-4805-8251-8336bb022087","Type":"ContainerStarted","Data":"4622996ed58ad7b4216731207e2a0bea9bc27a39240a16d89ecc74414d9d781f"} Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.617975 4980 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/barbican-api-6f67c95874-pm99w" event={"ID":"bc960852-4c05-4805-8251-8336bb022087","Type":"ContainerStarted","Data":"0f5796d63040039a04755e31fb9f1af45b0b7732efffaa81929c662bef19225a"} Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.618822 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.618851 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.626644 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-558f69c5-b5wnd" event={"ID":"2d2661ce-3148-48ac-a1b2-af154d207c5a","Type":"ContainerStarted","Data":"9909f5f7a2c2fd7b6837d321072867a98f51d9f39e001dbd6455b26cf941e0ea"} Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.626682 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-558f69c5-b5wnd" event={"ID":"2d2661ce-3148-48ac-a1b2-af154d207c5a","Type":"ContainerStarted","Data":"6aa076f4c9405613f1aa684b8920e093b4f3f2d8b6a0a39f3f13eafad0b6ab00"} Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.633132 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-78f9866cd4-xmbrp" podStartSLOduration=3.05028495 podStartE2EDuration="6.633116158s" podCreationTimestamp="2026-01-07 03:51:15 +0000 UTC" firstStartedPulling="2026-01-07 03:51:17.029872413 +0000 UTC m=+1123.595567148" lastFinishedPulling="2026-01-07 03:51:20.612703621 +0000 UTC m=+1127.178398356" observedRunningTime="2026-01-07 03:51:21.632186729 +0000 UTC m=+1128.197881464" watchObservedRunningTime="2026-01-07 03:51:21.633116158 +0000 UTC m=+1128.198810893" Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.668584 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-558f69c5-b5wnd" podStartSLOduration=2.963022518 podStartE2EDuration="6.668564794s" podCreationTimestamp="2026-01-07 03:51:15 +0000 UTC" firstStartedPulling="2026-01-07 03:51:16.844023255 +0000 UTC m=+1123.409717990" lastFinishedPulling="2026-01-07 03:51:20.549565531 +0000 UTC m=+1127.115260266" observedRunningTime="2026-01-07 03:51:21.664427275 +0000 UTC m=+1128.230122010" watchObservedRunningTime="2026-01-07 03:51:21.668564794 +0000 UTC m=+1128.234259529" Jan 07 03:51:21 crc kubenswrapper[4980]: I0107 03:51:21.696543 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f67c95874-pm99w" podStartSLOduration=3.696525056 podStartE2EDuration="3.696525056s" podCreationTimestamp="2026-01-07 03:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:21.694777362 +0000 UTC m=+1128.260472087" watchObservedRunningTime="2026-01-07 03:51:21.696525056 +0000 UTC m=+1128.262219791" Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.411471 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-547b7ddd64-7hclw" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.458610 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-565d4f6c4b-gj6mz" podUID="5d0304bc-69af-4a65-90e0-088a428990a1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.648971 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-7f54697b86-x8z4v" event={"ID":"0d68262c-96ba-42af-8b46-f13aa424ba0d","Type":"ContainerStarted","Data":"c49e3b8a9812912d289eca80bcb0d06d10f8b450a0fd3adc5194106a8050f6d4"} Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.650043 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f54697b86-x8z4v" event={"ID":"0d68262c-96ba-42af-8b46-f13aa424ba0d","Type":"ContainerStarted","Data":"fc6d806bda275d681b6645b4e33e8e64fa8389ca6976701cad5ec9887e420c87"} Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.650151 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.650217 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.663919 4980 generic.go:334] "Generic (PLEG): container finished" podID="2df19fa0-d6ce-4539-8f87-9d6935314e82" containerID="ca79779095f3d5c4e3cee36263df16743fdac1a089634cdb6a5103e151bf4164" exitCode=0 Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.664067 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9flf" event={"ID":"2df19fa0-d6ce-4539-8f87-9d6935314e82","Type":"ContainerDied","Data":"ca79779095f3d5c4e3cee36263df16743fdac1a089634cdb6a5103e151bf4164"} Jan 07 03:51:22 crc kubenswrapper[4980]: I0107 03:51:22.677899 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f54697b86-x8z4v" podStartSLOduration=2.677885046 podStartE2EDuration="2.677885046s" podCreationTimestamp="2026-01-07 03:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:22.677005568 +0000 UTC m=+1129.242700323" watchObservedRunningTime="2026-01-07 03:51:22.677885046 +0000 UTC m=+1129.243579781" Jan 07 
03:51:26 crc kubenswrapper[4980]: I0107 03:51:26.398251 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:26 crc kubenswrapper[4980]: I0107 03:51:26.482624 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tbmx2"] Jan 07 03:51:26 crc kubenswrapper[4980]: I0107 03:51:26.484023 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" podUID="60adb166-cc77-4a92-833b-59621ae07155" containerName="dnsmasq-dns" containerID="cri-o://89d6786902dd709102cd3e7626e49ec8ad82df65d0ecd95b99bac9bd63ad8ca1" gracePeriod=10 Jan 07 03:51:26 crc kubenswrapper[4980]: I0107 03:51:26.727959 4980 generic.go:334] "Generic (PLEG): container finished" podID="60adb166-cc77-4a92-833b-59621ae07155" containerID="89d6786902dd709102cd3e7626e49ec8ad82df65d0ecd95b99bac9bd63ad8ca1" exitCode=0 Jan 07 03:51:26 crc kubenswrapper[4980]: I0107 03:51:26.728004 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" event={"ID":"60adb166-cc77-4a92-833b-59621ae07155","Type":"ContainerDied","Data":"89d6786902dd709102cd3e7626e49ec8ad82df65d0ecd95b99bac9bd63ad8ca1"} Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.753699 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9flf" event={"ID":"2df19fa0-d6ce-4539-8f87-9d6935314e82","Type":"ContainerDied","Data":"d0f234bb728557e8a9ca35e821880d36ed252e171b1dc07bd2f8d0a6d4eb3244"} Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.753956 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0f234bb728557e8a9ca35e821880d36ed252e171b1dc07bd2f8d0a6d4eb3244" Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.822002 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b9flf" Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.924349 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-db-sync-config-data\") pod \"2df19fa0-d6ce-4539-8f87-9d6935314e82\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.924431 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-scripts\") pod \"2df19fa0-d6ce-4539-8f87-9d6935314e82\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.924529 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-config-data\") pod \"2df19fa0-d6ce-4539-8f87-9d6935314e82\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.927726 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mhc7\" (UniqueName: \"kubernetes.io/projected/2df19fa0-d6ce-4539-8f87-9d6935314e82-kube-api-access-2mhc7\") pod \"2df19fa0-d6ce-4539-8f87-9d6935314e82\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.927809 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2df19fa0-d6ce-4539-8f87-9d6935314e82-etc-machine-id\") pod \"2df19fa0-d6ce-4539-8f87-9d6935314e82\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.927848 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-combined-ca-bundle\") pod \"2df19fa0-d6ce-4539-8f87-9d6935314e82\" (UID: \"2df19fa0-d6ce-4539-8f87-9d6935314e82\") " Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.938062 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2df19fa0-d6ce-4539-8f87-9d6935314e82" (UID: "2df19fa0-d6ce-4539-8f87-9d6935314e82"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.938902 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2df19fa0-d6ce-4539-8f87-9d6935314e82-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2df19fa0-d6ce-4539-8f87-9d6935314e82" (UID: "2df19fa0-d6ce-4539-8f87-9d6935314e82"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.944395 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df19fa0-d6ce-4539-8f87-9d6935314e82-kube-api-access-2mhc7" (OuterVolumeSpecName: "kube-api-access-2mhc7") pod "2df19fa0-d6ce-4539-8f87-9d6935314e82" (UID: "2df19fa0-d6ce-4539-8f87-9d6935314e82"). InnerVolumeSpecName "kube-api-access-2mhc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:27 crc kubenswrapper[4980]: I0107 03:51:27.961756 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-scripts" (OuterVolumeSpecName: "scripts") pod "2df19fa0-d6ce-4539-8f87-9d6935314e82" (UID: "2df19fa0-d6ce-4539-8f87-9d6935314e82"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.020413 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2df19fa0-d6ce-4539-8f87-9d6935314e82" (UID: "2df19fa0-d6ce-4539-8f87-9d6935314e82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.030090 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mhc7\" (UniqueName: \"kubernetes.io/projected/2df19fa0-d6ce-4539-8f87-9d6935314e82-kube-api-access-2mhc7\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.030143 4980 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2df19fa0-d6ce-4539-8f87-9d6935314e82-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.030155 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.030165 4980 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.030188 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.088681 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-config-data" (OuterVolumeSpecName: "config-data") pod "2df19fa0-d6ce-4539-8f87-9d6935314e82" (UID: "2df19fa0-d6ce-4539-8f87-9d6935314e82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.131966 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df19fa0-d6ce-4539-8f87-9d6935314e82-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.439975 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.684180 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:28 crc kubenswrapper[4980]: I0107 03:51:28.788363 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9flf" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.040285 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:29 crc kubenswrapper[4980]: E0107 03:51:29.044922 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df19fa0-d6ce-4539-8f87-9d6935314e82" containerName="cinder-db-sync" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.044950 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df19fa0-d6ce-4539-8f87-9d6935314e82" containerName="cinder-db-sync" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.045170 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df19fa0-d6ce-4539-8f87-9d6935314e82" containerName="cinder-db-sync" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.046172 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.054338 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.054360 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.054496 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fkl2k" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.054616 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.065885 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.114631 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2b5wd"] Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.116201 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.152963 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3da149-0a31-4679-a580-69cc3d54ac1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.153017 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.153095 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.153112 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.153151 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 
07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.153211 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f22g\" (UniqueName: \"kubernetes.io/projected/4d3da149-0a31-4679-a580-69cc3d54ac1e-kube-api-access-8f22g\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.171362 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2b5wd"] Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.254952 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255239 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f22g\" (UniqueName: \"kubernetes.io/projected/4d3da149-0a31-4679-a580-69cc3d54ac1e-kube-api-access-8f22g\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255350 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3da149-0a31-4679-a580-69cc3d54ac1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255429 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-nb\") 
pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255505 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255706 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255791 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255874 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.255970 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 
03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.256051 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-config\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.256126 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kjb5\" (UniqueName: \"kubernetes.io/projected/e34a3269-fb24-4cc9-9f82-29784752137a-kube-api-access-2kjb5\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.256219 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.256694 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3da149-0a31-4679-a580-69cc3d54ac1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.264232 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 
03:51:29.267419 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.268697 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.280439 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.283826 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f22g\" (UniqueName: \"kubernetes.io/projected/4d3da149-0a31-4679-a580-69cc3d54ac1e-kube-api-access-8f22g\") pod \"cinder-scheduler-0\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.329623 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.331544 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.331674 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.337540 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.358519 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.358600 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.358660 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-config\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.358680 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kjb5\" (UniqueName: \"kubernetes.io/projected/e34a3269-fb24-4cc9-9f82-29784752137a-kube-api-access-2kjb5\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.358722 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.358743 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.364396 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.365026 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.365540 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.372686 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.381208 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-config\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.388053 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kjb5\" (UniqueName: \"kubernetes.io/projected/e34a3269-fb24-4cc9-9f82-29784752137a-kube-api-access-2kjb5\") pod \"dnsmasq-dns-5c9776ccc5-2b5wd\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.403729 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.460898 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a8c45a-6172-461f-b674-784045e96557-logs\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.460966 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.461007 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data-custom\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.461029 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-scripts\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.461048 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52ztl\" (UniqueName: \"kubernetes.io/projected/c6a8c45a-6172-461f-b674-784045e96557-kube-api-access-52ztl\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.461132 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.461154 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6a8c45a-6172-461f-b674-784045e96557-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.471750 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.563179 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data-custom\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.563479 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-scripts\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.563502 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52ztl\" (UniqueName: \"kubernetes.io/projected/c6a8c45a-6172-461f-b674-784045e96557-kube-api-access-52ztl\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 
03:51:29.563541 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.563576 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6a8c45a-6172-461f-b674-784045e96557-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.563680 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a8c45a-6172-461f-b674-784045e96557-logs\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.563704 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.563744 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6a8c45a-6172-461f-b674-784045e96557-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.564182 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a8c45a-6172-461f-b674-784045e96557-logs\") pod \"cinder-api-0\" (UID: 
\"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.568046 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-scripts\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.572109 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.572274 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data-custom\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.587085 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52ztl\" (UniqueName: \"kubernetes.io/projected/c6a8c45a-6172-461f-b674-784045e96557-kube-api-access-52ztl\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.593299 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data\") pod \"cinder-api-0\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " pod="openstack/cinder-api-0" Jan 07 03:51:29 crc kubenswrapper[4980]: I0107 03:51:29.743624 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 07 03:51:30 crc kubenswrapper[4980]: I0107 03:51:30.770612 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:30 crc kubenswrapper[4980]: I0107 03:51:30.950436 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f67c95874-pm99w" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.065653 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76477f569b-rknll"] Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.066160 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76477f569b-rknll" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api-log" containerID="cri-o://0e86a2e894b7bfba2df7a16643e45d451153e319d788ef70232eafdbe5784175" gracePeriod=30 Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.066614 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76477f569b-rknll" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api" containerID="cri-o://820afbd26c93cfd13e5a0ceaa61e7b597b55d586b7c8edde6a0aaca5d6833add" gracePeriod=30 Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.284993 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.401253 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-sb\") pod \"60adb166-cc77-4a92-833b-59621ae07155\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.401635 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rpcr\" (UniqueName: \"kubernetes.io/projected/60adb166-cc77-4a92-833b-59621ae07155-kube-api-access-8rpcr\") pod \"60adb166-cc77-4a92-833b-59621ae07155\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.401678 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-nb\") pod \"60adb166-cc77-4a92-833b-59621ae07155\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.401742 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-swift-storage-0\") pod \"60adb166-cc77-4a92-833b-59621ae07155\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.401780 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-config\") pod \"60adb166-cc77-4a92-833b-59621ae07155\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.401855 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-svc\") pod \"60adb166-cc77-4a92-833b-59621ae07155\" (UID: \"60adb166-cc77-4a92-833b-59621ae07155\") " Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.517975 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60adb166-cc77-4a92-833b-59621ae07155-kube-api-access-8rpcr" (OuterVolumeSpecName: "kube-api-access-8rpcr") pod "60adb166-cc77-4a92-833b-59621ae07155" (UID: "60adb166-cc77-4a92-833b-59621ae07155"). InnerVolumeSpecName "kube-api-access-8rpcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.535085 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.535954 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60adb166-cc77-4a92-833b-59621ae07155" (UID: "60adb166-cc77-4a92-833b-59621ae07155"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.611356 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.611394 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rpcr\" (UniqueName: \"kubernetes.io/projected/60adb166-cc77-4a92-833b-59621ae07155-kube-api-access-8rpcr\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:31 crc kubenswrapper[4980]: E0107 03:51:31.643413 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.653096 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "60adb166-cc77-4a92-833b-59621ae07155" (UID: "60adb166-cc77-4a92-833b-59621ae07155"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.659300 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-config" (OuterVolumeSpecName: "config") pod "60adb166-cc77-4a92-833b-59621ae07155" (UID: "60adb166-cc77-4a92-833b-59621ae07155"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.662501 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "60adb166-cc77-4a92-833b-59621ae07155" (UID: "60adb166-cc77-4a92-833b-59621ae07155"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.670201 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "60adb166-cc77-4a92-833b-59621ae07155" (UID: "60adb166-cc77-4a92-833b-59621ae07155"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.712894 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.712936 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.712949 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.712959 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60adb166-cc77-4a92-833b-59621ae07155-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.811985 4980 generic.go:334] "Generic (PLEG): container finished" podID="1311b795-d460-4939-8ba0-73e023eb9940" containerID="0e86a2e894b7bfba2df7a16643e45d451153e319d788ef70232eafdbe5784175" exitCode=143 Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.812184 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76477f569b-rknll" event={"ID":"1311b795-d460-4939-8ba0-73e023eb9940","Type":"ContainerDied","Data":"0e86a2e894b7bfba2df7a16643e45d451153e319d788ef70232eafdbe5784175"} Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.820041 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerStarted","Data":"401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca"} Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.820212 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="ceilometer-notification-agent" containerID="cri-o://b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c" gracePeriod=30 Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.820472 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.820837 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="proxy-httpd" containerID="cri-o://401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca" gracePeriod=30 Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.820889 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="sg-core" 
containerID="cri-o://b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666" gracePeriod=30 Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.843915 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.850403 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" event={"ID":"60adb166-cc77-4a92-833b-59621ae07155","Type":"ContainerDied","Data":"2263c1ee3bbbb74519369b63a2166ce2712cef2dbe14b10b5a2de45f0143458c"} Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.850474 4980 scope.go:117] "RemoveContainer" containerID="89d6786902dd709102cd3e7626e49ec8ad82df65d0ecd95b99bac9bd63ad8ca1" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.851022 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.883895 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:31 crc kubenswrapper[4980]: W0107 03:51:31.904634 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d3da149_0a31_4679_a580_69cc3d54ac1e.slice/crio-c3901487edf79f978c9d4bf18af903e89f937bf0effc41aa75bf2624cd202473 WatchSource:0}: Error finding container c3901487edf79f978c9d4bf18af903e89f937bf0effc41aa75bf2624cd202473: Status 404 returned error can't find the container with id c3901487edf79f978c9d4bf18af903e89f937bf0effc41aa75bf2624cd202473 Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.909380 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tbmx2"] Jan 07 03:51:31 crc kubenswrapper[4980]: I0107 03:51:31.922317 4980 scope.go:117] "RemoveContainer" containerID="b3577780f33f804a49397e1954350b7a46dbad74774040ae259e3837b66eca46" Jan 07 03:51:31 crc 
kubenswrapper[4980]: I0107 03:51:31.945735 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tbmx2"] Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.014913 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2b5wd"] Jan 07 03:51:32 crc kubenswrapper[4980]: E0107 03:51:32.508653 4980 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod223a79c7_9460_4b16_a65d_90e2c0751dfa.slice/crio-401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod223a79c7_9460_4b16_a65d_90e2c0751dfa.slice/crio-conmon-401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca.scope\": RecentStats: unable to find data in memory cache]" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.537608 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.636091 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.637784 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/37c5e78b-d013-4936-a6a0-639aff10ff45-horizon-secret-key\") pod \"37c5e78b-d013-4936-a6a0-639aff10ff45\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.637911 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-scripts\") pod \"37c5e78b-d013-4936-a6a0-639aff10ff45\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.637993 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-config-data\") pod \"37c5e78b-d013-4936-a6a0-639aff10ff45\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.638171 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c5e78b-d013-4936-a6a0-639aff10ff45-logs\") pod \"37c5e78b-d013-4936-a6a0-639aff10ff45\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.638249 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k4wq\" (UniqueName: \"kubernetes.io/projected/37c5e78b-d013-4936-a6a0-639aff10ff45-kube-api-access-7k4wq\") pod \"37c5e78b-d013-4936-a6a0-639aff10ff45\" (UID: \"37c5e78b-d013-4936-a6a0-639aff10ff45\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.641487 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/37c5e78b-d013-4936-a6a0-639aff10ff45-logs" (OuterVolumeSpecName: "logs") pod "37c5e78b-d013-4936-a6a0-639aff10ff45" (UID: "37c5e78b-d013-4936-a6a0-639aff10ff45"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.646032 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c5e78b-d013-4936-a6a0-639aff10ff45-kube-api-access-7k4wq" (OuterVolumeSpecName: "kube-api-access-7k4wq") pod "37c5e78b-d013-4936-a6a0-639aff10ff45" (UID: "37c5e78b-d013-4936-a6a0-639aff10ff45"). InnerVolumeSpecName "kube-api-access-7k4wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.649429 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c5e78b-d013-4936-a6a0-639aff10ff45-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "37c5e78b-d013-4936-a6a0-639aff10ff45" (UID: "37c5e78b-d013-4936-a6a0-639aff10ff45"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.678998 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-config-data" (OuterVolumeSpecName: "config-data") pod "37c5e78b-d013-4936-a6a0-639aff10ff45" (UID: "37c5e78b-d013-4936-a6a0-639aff10ff45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.711133 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-scripts" (OuterVolumeSpecName: "scripts") pod "37c5e78b-d013-4936-a6a0-639aff10ff45" (UID: "37c5e78b-d013-4936-a6a0-639aff10ff45"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742216 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99klz\" (UniqueName: \"kubernetes.io/projected/02ac366d-3498-4985-b9d8-5d145f5c3048-kube-api-access-99klz\") pod \"02ac366d-3498-4985-b9d8-5d145f5c3048\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742310 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02ac366d-3498-4985-b9d8-5d145f5c3048-logs\") pod \"02ac366d-3498-4985-b9d8-5d145f5c3048\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742359 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-config-data\") pod \"02ac366d-3498-4985-b9d8-5d145f5c3048\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742407 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-scripts\") pod \"02ac366d-3498-4985-b9d8-5d145f5c3048\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742469 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02ac366d-3498-4985-b9d8-5d145f5c3048-horizon-secret-key\") pod \"02ac366d-3498-4985-b9d8-5d145f5c3048\" (UID: \"02ac366d-3498-4985-b9d8-5d145f5c3048\") " Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742926 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k4wq\" (UniqueName: 
\"kubernetes.io/projected/37c5e78b-d013-4936-a6a0-639aff10ff45-kube-api-access-7k4wq\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742942 4980 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/37c5e78b-d013-4936-a6a0-639aff10ff45-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742951 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742961 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37c5e78b-d013-4936-a6a0-639aff10ff45-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.742971 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c5e78b-d013-4936-a6a0-639aff10ff45-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.743176 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ac366d-3498-4985-b9d8-5d145f5c3048-logs" (OuterVolumeSpecName: "logs") pod "02ac366d-3498-4985-b9d8-5d145f5c3048" (UID: "02ac366d-3498-4985-b9d8-5d145f5c3048"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.751148 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ac366d-3498-4985-b9d8-5d145f5c3048-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "02ac366d-3498-4985-b9d8-5d145f5c3048" (UID: "02ac366d-3498-4985-b9d8-5d145f5c3048"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.752117 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ac366d-3498-4985-b9d8-5d145f5c3048-kube-api-access-99klz" (OuterVolumeSpecName: "kube-api-access-99klz") pod "02ac366d-3498-4985-b9d8-5d145f5c3048" (UID: "02ac366d-3498-4985-b9d8-5d145f5c3048"). InnerVolumeSpecName "kube-api-access-99klz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.770942 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-scripts" (OuterVolumeSpecName: "scripts") pod "02ac366d-3498-4985-b9d8-5d145f5c3048" (UID: "02ac366d-3498-4985-b9d8-5d145f5c3048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.785598 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-config-data" (OuterVolumeSpecName: "config-data") pod "02ac366d-3498-4985-b9d8-5d145f5c3048" (UID: "02ac366d-3498-4985-b9d8-5d145f5c3048"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.844759 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02ac366d-3498-4985-b9d8-5d145f5c3048-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.844796 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.844808 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ac366d-3498-4985-b9d8-5d145f5c3048-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.844817 4980 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02ac366d-3498-4985-b9d8-5d145f5c3048-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.844826 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99klz\" (UniqueName: \"kubernetes.io/projected/02ac366d-3498-4985-b9d8-5d145f5c3048-kube-api-access-99klz\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.888259 4980 generic.go:334] "Generic (PLEG): container finished" podID="37c5e78b-d013-4936-a6a0-639aff10ff45" containerID="cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45" exitCode=137 Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.888321 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f6b9d9cdf-2t4tv" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.888330 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6b9d9cdf-2t4tv" event={"ID":"37c5e78b-d013-4936-a6a0-639aff10ff45","Type":"ContainerDied","Data":"cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.888359 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6b9d9cdf-2t4tv" event={"ID":"37c5e78b-d013-4936-a6a0-639aff10ff45","Type":"ContainerDied","Data":"52d3a39fb0d8e4deed60674de64979a1c7295cd707e0ce6a0860c3d8ad4c7c70"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.888393 4980 scope.go:117] "RemoveContainer" containerID="cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.900163 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6a8c45a-6172-461f-b674-784045e96557","Type":"ContainerStarted","Data":"e9fee459e41629d1b8c14689dae010612ca9a7d04e0185ee4d9036d4542311cc"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.900208 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6a8c45a-6172-461f-b674-784045e96557","Type":"ContainerStarted","Data":"b193028c29162c271c1342655620054858e6c84aa14dab045c13526ed8154f77"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.924316 4980 generic.go:334] "Generic (PLEG): container finished" podID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerID="401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca" exitCode=0 Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.924350 4980 generic.go:334] "Generic (PLEG): container finished" podID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerID="b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666" exitCode=2 Jan 07 03:51:32 crc 
kubenswrapper[4980]: I0107 03:51:32.924405 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerDied","Data":"401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.924433 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerDied","Data":"b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.957113 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f6b9d9cdf-2t4tv"] Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.961715 4980 generic.go:334] "Generic (PLEG): container finished" podID="e34a3269-fb24-4cc9-9f82-29784752137a" containerID="501edb08c4f717a7a0644e2e379dfe88410b64e74d4423bfba4853318c0bc894" exitCode=0 Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.961970 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" event={"ID":"e34a3269-fb24-4cc9-9f82-29784752137a","Type":"ContainerDied","Data":"501edb08c4f717a7a0644e2e379dfe88410b64e74d4423bfba4853318c0bc894"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.962020 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" event={"ID":"e34a3269-fb24-4cc9-9f82-29784752137a","Type":"ContainerStarted","Data":"7ee29ade6a071d7c73ce95e64904edfe81fa83721a220bc95c5192bd1d286dca"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.974418 4980 generic.go:334] "Generic (PLEG): container finished" podID="02ac366d-3498-4985-b9d8-5d145f5c3048" containerID="e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9" exitCode=137 Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.974511 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7658bcf7b7-pdhtz" event={"ID":"02ac366d-3498-4985-b9d8-5d145f5c3048","Type":"ContainerDied","Data":"e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.974543 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7658bcf7b7-pdhtz" event={"ID":"02ac366d-3498-4985-b9d8-5d145f5c3048","Type":"ContainerDied","Data":"ae764c7cb94c810f41491bac3dd616ae17de0553a9329285ec26df903913e708"} Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.974617 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7658bcf7b7-pdhtz" Jan 07 03:51:32 crc kubenswrapper[4980]: I0107 03:51:32.996495 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f6b9d9cdf-2t4tv"] Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.009735 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d3da149-0a31-4679-a580-69cc3d54ac1e","Type":"ContainerStarted","Data":"c3901487edf79f978c9d4bf18af903e89f937bf0effc41aa75bf2624cd202473"} Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.076618 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7658bcf7b7-pdhtz"] Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.087772 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7658bcf7b7-pdhtz"] Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.124021 4980 scope.go:117] "RemoveContainer" containerID="cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45" Jan 07 03:51:33 crc kubenswrapper[4980]: E0107 03:51:33.127878 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45\": container with ID starting with 
cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45 not found: ID does not exist" containerID="cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.127924 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45"} err="failed to get container status \"cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45\": rpc error: code = NotFound desc = could not find container \"cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45\": container with ID starting with cc9e12ea308d8375f0fe6ad1b6e4dcae421ce6d774c815dcea6917af5b4efa45 not found: ID does not exist" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.127948 4980 scope.go:117] "RemoveContainer" containerID="e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.482652 4980 scope.go:117] "RemoveContainer" containerID="e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9" Jan 07 03:51:33 crc kubenswrapper[4980]: E0107 03:51:33.483718 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9\": container with ID starting with e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9 not found: ID does not exist" containerID="e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.483777 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9"} err="failed to get container status \"e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9\": rpc error: code = NotFound desc = could not find container 
\"e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9\": container with ID starting with e78489d207f8bf1ba21e08c0ed11d851a91df66573a02c9e5447f50c20ea30d9 not found: ID does not exist" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.598822 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-tbmx2" podUID="60adb166-cc77-4a92-833b-59621ae07155" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.630914 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.719616 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6fd86d65-8ldnw" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.781912 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-scripts\") pod \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.783726 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d74f1e6-2229-4ff7-8c80-b12d09285da4-horizon-secret-key\") pod \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.783939 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d74f1e6-2229-4ff7-8c80-b12d09285da4-logs\") pod \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.784004 4980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxhl2\" (UniqueName: \"kubernetes.io/projected/9d74f1e6-2229-4ff7-8c80-b12d09285da4-kube-api-access-wxhl2\") pod \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.784071 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-config-data\") pod \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\" (UID: \"9d74f1e6-2229-4ff7-8c80-b12d09285da4\") " Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.785797 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d74f1e6-2229-4ff7-8c80-b12d09285da4-logs" (OuterVolumeSpecName: "logs") pod "9d74f1e6-2229-4ff7-8c80-b12d09285da4" (UID: "9d74f1e6-2229-4ff7-8c80-b12d09285da4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.791845 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d74f1e6-2229-4ff7-8c80-b12d09285da4-kube-api-access-wxhl2" (OuterVolumeSpecName: "kube-api-access-wxhl2") pod "9d74f1e6-2229-4ff7-8c80-b12d09285da4" (UID: "9d74f1e6-2229-4ff7-8c80-b12d09285da4"). InnerVolumeSpecName "kube-api-access-wxhl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.793538 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ac366d-3498-4985-b9d8-5d145f5c3048" path="/var/lib/kubelet/pods/02ac366d-3498-4985-b9d8-5d145f5c3048/volumes" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.794153 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c5e78b-d013-4936-a6a0-639aff10ff45" path="/var/lib/kubelet/pods/37c5e78b-d013-4936-a6a0-639aff10ff45/volumes" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.798901 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60adb166-cc77-4a92-833b-59621ae07155" path="/var/lib/kubelet/pods/60adb166-cc77-4a92-833b-59621ae07155/volumes" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.811142 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d74f1e6-2229-4ff7-8c80-b12d09285da4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9d74f1e6-2229-4ff7-8c80-b12d09285da4" (UID: "9d74f1e6-2229-4ff7-8c80-b12d09285da4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.829040 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-scripts" (OuterVolumeSpecName: "scripts") pod "9d74f1e6-2229-4ff7-8c80-b12d09285da4" (UID: "9d74f1e6-2229-4ff7-8c80-b12d09285da4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.839850 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-config-data" (OuterVolumeSpecName: "config-data") pod "9d74f1e6-2229-4ff7-8c80-b12d09285da4" (UID: "9d74f1e6-2229-4ff7-8c80-b12d09285da4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.887043 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.887077 4980 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d74f1e6-2229-4ff7-8c80-b12d09285da4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.887087 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d74f1e6-2229-4ff7-8c80-b12d09285da4-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.887096 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxhl2\" (UniqueName: \"kubernetes.io/projected/9d74f1e6-2229-4ff7-8c80-b12d09285da4-kube-api-access-wxhl2\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:33 crc kubenswrapper[4980]: I0107 03:51:33.887106 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d74f1e6-2229-4ff7-8c80-b12d09285da4-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.027111 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c6a8c45a-6172-461f-b674-784045e96557","Type":"ContainerStarted","Data":"96b8044404c04775a7752e59f0de2978470235d65cd7f6d9cbe067c7ad516553"} Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.027265 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api-log" containerID="cri-o://e9fee459e41629d1b8c14689dae010612ca9a7d04e0185ee4d9036d4542311cc" gracePeriod=30 Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.027568 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.027836 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api" containerID="cri-o://96b8044404c04775a7752e59f0de2978470235d65cd7f6d9cbe067c7ad516553" gracePeriod=30 Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.034494 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" event={"ID":"e34a3269-fb24-4cc9-9f82-29784752137a","Type":"ContainerStarted","Data":"c75ffdd141ae1fc5672e21e14b6838abcf68a947b26dd5c4926d2065d8247ef5"} Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.036472 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.057368 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.057343914 podStartE2EDuration="5.057343914s" podCreationTimestamp="2026-01-07 03:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:34.050600534 +0000 UTC m=+1140.616295279" watchObservedRunningTime="2026-01-07 
03:51:34.057343914 +0000 UTC m=+1140.623038649" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.063021 4980 generic.go:334] "Generic (PLEG): container finished" podID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerID="409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592" exitCode=137 Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.063052 4980 generic.go:334] "Generic (PLEG): container finished" podID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerID="d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5" exitCode=137 Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.063100 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6fd86d65-8ldnw" event={"ID":"9d74f1e6-2229-4ff7-8c80-b12d09285da4","Type":"ContainerDied","Data":"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592"} Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.063128 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6fd86d65-8ldnw" event={"ID":"9d74f1e6-2229-4ff7-8c80-b12d09285da4","Type":"ContainerDied","Data":"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5"} Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.063139 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6fd86d65-8ldnw" event={"ID":"9d74f1e6-2229-4ff7-8c80-b12d09285da4","Type":"ContainerDied","Data":"980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851"} Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.063154 4980 scope.go:117] "RemoveContainer" containerID="409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.063099 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6fd86d65-8ldnw" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.082319 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" podStartSLOduration=5.082303562 podStartE2EDuration="5.082303562s" podCreationTimestamp="2026-01-07 03:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:34.079185465 +0000 UTC m=+1140.644880200" watchObservedRunningTime="2026-01-07 03:51:34.082303562 +0000 UTC m=+1140.647998297" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.115108 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6fd86d65-8ldnw"] Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.126036 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f6fd86d65-8ldnw"] Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.248955 4980 scope.go:117] "RemoveContainer" containerID="d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.427935 4980 scope.go:117] "RemoveContainer" containerID="409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592" Jan 07 03:51:34 crc kubenswrapper[4980]: E0107 03:51:34.428374 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592\": container with ID starting with 409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592 not found: ID does not exist" containerID="409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.428400 4980 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592"} err="failed to get container status \"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592\": rpc error: code = NotFound desc = could not find container \"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592\": container with ID starting with 409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592 not found: ID does not exist" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.428420 4980 scope.go:117] "RemoveContainer" containerID="d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5" Jan 07 03:51:34 crc kubenswrapper[4980]: E0107 03:51:34.432837 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5\": container with ID starting with d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5 not found: ID does not exist" containerID="d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.432872 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5"} err="failed to get container status \"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5\": rpc error: code = NotFound desc = could not find container \"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5\": container with ID starting with d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5 not found: ID does not exist" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.432893 4980 scope.go:117] "RemoveContainer" containerID="409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.434057 4980 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592"} err="failed to get container status \"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592\": rpc error: code = NotFound desc = could not find container \"409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592\": container with ID starting with 409f35ad2c7db689e8e29f2186a2f3769d40d2f3e43d63d026aa647611f6a592 not found: ID does not exist" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.434081 4980 scope.go:117] "RemoveContainer" containerID="d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.451842 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5"} err="failed to get container status \"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5\": rpc error: code = NotFound desc = could not find container \"d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5\": container with ID starting with d854ce87110918804b3f8fc8e714927f1435ad0a9afebfcce22347b43a9298a5 not found: ID does not exist" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.664159 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.679337 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.732693 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76477f569b-rknll" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:34836->10.217.0.159:9311: read: connection reset by peer" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.732704 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76477f569b-rknll" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:34822->10.217.0.159:9311: read: connection reset by peer" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.805058 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w2ql\" (UniqueName: \"kubernetes.io/projected/223a79c7-9460-4b16-a65d-90e2c0751dfa-kube-api-access-8w2ql\") pod \"223a79c7-9460-4b16-a65d-90e2c0751dfa\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.805118 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-run-httpd\") pod \"223a79c7-9460-4b16-a65d-90e2c0751dfa\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.805187 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-config-data\") pod \"223a79c7-9460-4b16-a65d-90e2c0751dfa\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.805209 4980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-log-httpd\") pod \"223a79c7-9460-4b16-a65d-90e2c0751dfa\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.805261 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-combined-ca-bundle\") pod \"223a79c7-9460-4b16-a65d-90e2c0751dfa\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.805276 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-scripts\") pod \"223a79c7-9460-4b16-a65d-90e2c0751dfa\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.805302 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-sg-core-conf-yaml\") pod \"223a79c7-9460-4b16-a65d-90e2c0751dfa\" (UID: \"223a79c7-9460-4b16-a65d-90e2c0751dfa\") " Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.808807 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "223a79c7-9460-4b16-a65d-90e2c0751dfa" (UID: "223a79c7-9460-4b16-a65d-90e2c0751dfa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.808900 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "223a79c7-9460-4b16-a65d-90e2c0751dfa" (UID: "223a79c7-9460-4b16-a65d-90e2c0751dfa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.815142 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-scripts" (OuterVolumeSpecName: "scripts") pod "223a79c7-9460-4b16-a65d-90e2c0751dfa" (UID: "223a79c7-9460-4b16-a65d-90e2c0751dfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.817713 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223a79c7-9460-4b16-a65d-90e2c0751dfa-kube-api-access-8w2ql" (OuterVolumeSpecName: "kube-api-access-8w2ql") pod "223a79c7-9460-4b16-a65d-90e2c0751dfa" (UID: "223a79c7-9460-4b16-a65d-90e2c0751dfa"). InnerVolumeSpecName "kube-api-access-8w2ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.859832 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "223a79c7-9460-4b16-a65d-90e2c0751dfa" (UID: "223a79c7-9460-4b16-a65d-90e2c0751dfa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.881071 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "223a79c7-9460-4b16-a65d-90e2c0751dfa" (UID: "223a79c7-9460-4b16-a65d-90e2c0751dfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.907358 4980 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.907396 4980 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223a79c7-9460-4b16-a65d-90e2c0751dfa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.907409 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.907424 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.907436 4980 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.907448 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w2ql\" (UniqueName: 
\"kubernetes.io/projected/223a79c7-9460-4b16-a65d-90e2c0751dfa-kube-api-access-8w2ql\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.918697 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-config-data" (OuterVolumeSpecName: "config-data") pod "223a79c7-9460-4b16-a65d-90e2c0751dfa" (UID: "223a79c7-9460-4b16-a65d-90e2c0751dfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:34 crc kubenswrapper[4980]: I0107 03:51:34.994976 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.011274 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223a79c7-9460-4b16-a65d-90e2c0751dfa-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.077502 4980 generic.go:334] "Generic (PLEG): container finished" podID="1311b795-d460-4939-8ba0-73e023eb9940" containerID="820afbd26c93cfd13e5a0ceaa61e7b597b55d586b7c8edde6a0aaca5d6833add" exitCode=0 Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.077575 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76477f569b-rknll" event={"ID":"1311b795-d460-4939-8ba0-73e023eb9940","Type":"ContainerDied","Data":"820afbd26c93cfd13e5a0ceaa61e7b597b55d586b7c8edde6a0aaca5d6833add"} Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.080064 4980 generic.go:334] "Generic (PLEG): container finished" podID="c6a8c45a-6172-461f-b674-784045e96557" containerID="96b8044404c04775a7752e59f0de2978470235d65cd7f6d9cbe067c7ad516553" exitCode=0 Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.080086 4980 generic.go:334] "Generic (PLEG): container finished" podID="c6a8c45a-6172-461f-b674-784045e96557" 
containerID="e9fee459e41629d1b8c14689dae010612ca9a7d04e0185ee4d9036d4542311cc" exitCode=143 Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.080113 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6a8c45a-6172-461f-b674-784045e96557","Type":"ContainerDied","Data":"96b8044404c04775a7752e59f0de2978470235d65cd7f6d9cbe067c7ad516553"} Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.080130 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6a8c45a-6172-461f-b674-784045e96557","Type":"ContainerDied","Data":"e9fee459e41629d1b8c14689dae010612ca9a7d04e0185ee4d9036d4542311cc"} Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.086048 4980 generic.go:334] "Generic (PLEG): container finished" podID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerID="b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c" exitCode=0 Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.087086 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerDied","Data":"b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c"} Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.087210 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223a79c7-9460-4b16-a65d-90e2c0751dfa","Type":"ContainerDied","Data":"4cc923c5cc1b21b84c006745109cc17d19e738ad0fcb791843e82f683df2797b"} Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.087230 4980 scope.go:117] "RemoveContainer" containerID="401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.092762 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.102736 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d3da149-0a31-4679-a580-69cc3d54ac1e","Type":"ContainerStarted","Data":"f20caa55b58916282c09419f94d749f4611a29ec253caf31ed69882c63402372"} Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.148217 4980 scope.go:117] "RemoveContainer" containerID="b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.172343 4980 scope.go:117] "RemoveContainer" containerID="b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.332874 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.343727 4980 scope.go:117] "RemoveContainer" containerID="401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.354775 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca\": container with ID starting with 401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca not found: ID does not exist" containerID="401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.354857 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca"} err="failed to get container status \"401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca\": rpc error: code = NotFound desc = could not find container 
\"401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca\": container with ID starting with 401f24e9a750e5f7ae969742e07a4a2733e2ed7488d92431901b98f0d22bb1ca not found: ID does not exist" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.354895 4980 scope.go:117] "RemoveContainer" containerID="b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.364816 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666\": container with ID starting with b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666 not found: ID does not exist" containerID="b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.364881 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666"} err="failed to get container status \"b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666\": rpc error: code = NotFound desc = could not find container \"b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666\": container with ID starting with b82f3d1bb2080ec8ea42e235419727baaf200a445ac848a5ba89f919bdb7d666 not found: ID does not exist" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.364918 4980 scope.go:117] "RemoveContainer" containerID="b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.367694 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c\": container with ID starting with b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c not found: ID does not exist" 
containerID="b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.367756 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c"} err="failed to get container status \"b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c\": rpc error: code = NotFound desc = could not find container \"b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c\": container with ID starting with b9934fc35eb23d126049fc2556100181967f389cadf8abbce1143ebbbccd096c not found: ID does not exist" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.371466 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.394897 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.404180 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417084 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417629 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="ceilometer-notification-agent" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417664 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="ceilometer-notification-agent" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417685 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api-log" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417695 
4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api-log" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417710 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="proxy-httpd" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417718 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="proxy-httpd" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417727 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60adb166-cc77-4a92-833b-59621ae07155" containerName="init" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417735 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="60adb166-cc77-4a92-833b-59621ae07155" containerName="init" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417756 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ac366d-3498-4985-b9d8-5d145f5c3048" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417764 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ac366d-3498-4985-b9d8-5d145f5c3048" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417776 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api-log" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417783 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api-log" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417793 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417801 4980 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417810 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon-log" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417817 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon-log" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417830 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417837 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417853 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="sg-core" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417860 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="sg-core" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417881 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c5e78b-d013-4936-a6a0-639aff10ff45" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417888 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c5e78b-d013-4936-a6a0-639aff10ff45" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417902 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60adb166-cc77-4a92-833b-59621ae07155" containerName="dnsmasq-dns" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417911 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="60adb166-cc77-4a92-833b-59621ae07155" 
containerName="dnsmasq-dns" Jan 07 03:51:35 crc kubenswrapper[4980]: E0107 03:51:35.417921 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.417930 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418301 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418320 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ac366d-3498-4985-b9d8-5d145f5c3048" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418330 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c5e78b-d013-4936-a6a0-639aff10ff45" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418344 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon-log" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418356 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="proxy-httpd" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418365 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" containerName="horizon" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418378 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418397 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a8c45a-6172-461f-b674-784045e96557" containerName="cinder-api-log" Jan 07 
03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418410 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="ceilometer-notification-agent" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418423 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="60adb166-cc77-4a92-833b-59621ae07155" containerName="dnsmasq-dns" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418434 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" containerName="sg-core" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.418443 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="1311b795-d460-4939-8ba0-73e023eb9940" containerName="barbican-api-log" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.425762 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.431703 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.431893 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.451428 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453541 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt9qx\" (UniqueName: \"kubernetes.io/projected/1311b795-d460-4939-8ba0-73e023eb9940-kube-api-access-dt9qx\") pod \"1311b795-d460-4939-8ba0-73e023eb9940\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453636 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data\") pod \"c6a8c45a-6172-461f-b674-784045e96557\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453688 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1311b795-d460-4939-8ba0-73e023eb9940-logs\") pod \"1311b795-d460-4939-8ba0-73e023eb9940\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453804 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-combined-ca-bundle\") pod \"1311b795-d460-4939-8ba0-73e023eb9940\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453859 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data-custom\") pod \"c6a8c45a-6172-461f-b674-784045e96557\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453891 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a8c45a-6172-461f-b674-784045e96557-logs\") pod \"c6a8c45a-6172-461f-b674-784045e96557\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453927 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6a8c45a-6172-461f-b674-784045e96557-etc-machine-id\") pod \"c6a8c45a-6172-461f-b674-784045e96557\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453951 4980 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-combined-ca-bundle\") pod \"c6a8c45a-6172-461f-b674-784045e96557\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453975 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-scripts\") pod \"c6a8c45a-6172-461f-b674-784045e96557\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.453996 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data-custom\") pod \"1311b795-d460-4939-8ba0-73e023eb9940\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.454038 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data\") pod \"1311b795-d460-4939-8ba0-73e023eb9940\" (UID: \"1311b795-d460-4939-8ba0-73e023eb9940\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.454091 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52ztl\" (UniqueName: \"kubernetes.io/projected/c6a8c45a-6172-461f-b674-784045e96557-kube-api-access-52ztl\") pod \"c6a8c45a-6172-461f-b674-784045e96557\" (UID: \"c6a8c45a-6172-461f-b674-784045e96557\") " Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.456445 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a8c45a-6172-461f-b674-784045e96557-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"c6a8c45a-6172-461f-b674-784045e96557" (UID: "c6a8c45a-6172-461f-b674-784045e96557"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.457448 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a8c45a-6172-461f-b674-784045e96557-logs" (OuterVolumeSpecName: "logs") pod "c6a8c45a-6172-461f-b674-784045e96557" (UID: "c6a8c45a-6172-461f-b674-784045e96557"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.457516 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1311b795-d460-4939-8ba0-73e023eb9940-logs" (OuterVolumeSpecName: "logs") pod "1311b795-d460-4939-8ba0-73e023eb9940" (UID: "1311b795-d460-4939-8ba0-73e023eb9940"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.470377 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1311b795-d460-4939-8ba0-73e023eb9940" (UID: "1311b795-d460-4939-8ba0-73e023eb9940"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.470440 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a8c45a-6172-461f-b674-784045e96557-kube-api-access-52ztl" (OuterVolumeSpecName: "kube-api-access-52ztl") pod "c6a8c45a-6172-461f-b674-784045e96557" (UID: "c6a8c45a-6172-461f-b674-784045e96557"). InnerVolumeSpecName "kube-api-access-52ztl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.471380 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1311b795-d460-4939-8ba0-73e023eb9940-kube-api-access-dt9qx" (OuterVolumeSpecName: "kube-api-access-dt9qx") pod "1311b795-d460-4939-8ba0-73e023eb9940" (UID: "1311b795-d460-4939-8ba0-73e023eb9940"). InnerVolumeSpecName "kube-api-access-dt9qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.482059 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6a8c45a-6172-461f-b674-784045e96557" (UID: "c6a8c45a-6172-461f-b674-784045e96557"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.500138 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-scripts" (OuterVolumeSpecName: "scripts") pod "c6a8c45a-6172-461f-b674-784045e96557" (UID: "c6a8c45a-6172-461f-b674-784045e96557"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.516707 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1311b795-d460-4939-8ba0-73e023eb9940" (UID: "1311b795-d460-4939-8ba0-73e023eb9940"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.518774 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6a8c45a-6172-461f-b674-784045e96557" (UID: "c6a8c45a-6172-461f-b674-784045e96557"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.532639 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data" (OuterVolumeSpecName: "config-data") pod "c6a8c45a-6172-461f-b674-784045e96557" (UID: "c6a8c45a-6172-461f-b674-784045e96557"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.540721 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data" (OuterVolumeSpecName: "config-data") pod "1311b795-d460-4939-8ba0-73e023eb9940" (UID: "1311b795-d460-4939-8ba0-73e023eb9940"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.555915 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.555967 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-config-data\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556042 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-scripts\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556081 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-log-httpd\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556118 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-run-httpd\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556138 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556168 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lp7\" (UniqueName: \"kubernetes.io/projected/ff693cd6-6951-4953-80e4-971c4386f05f-kube-api-access-p4lp7\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556219 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556230 4980 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556239 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a8c45a-6172-461f-b674-784045e96557-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556248 4980 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6a8c45a-6172-461f-b674-784045e96557-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556256 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556264 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556274 4980 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556285 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1311b795-d460-4939-8ba0-73e023eb9940-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556293 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52ztl\" (UniqueName: \"kubernetes.io/projected/c6a8c45a-6172-461f-b674-784045e96557-kube-api-access-52ztl\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556303 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt9qx\" (UniqueName: \"kubernetes.io/projected/1311b795-d460-4939-8ba0-73e023eb9940-kube-api-access-dt9qx\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.556313 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a8c45a-6172-461f-b674-784045e96557-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.557095 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1311b795-d460-4939-8ba0-73e023eb9940-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.658242 4980 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-scripts\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.658313 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-log-httpd\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.658333 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-run-httpd\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.658350 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.658379 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4lp7\" (UniqueName: \"kubernetes.io/projected/ff693cd6-6951-4953-80e4-971c4386f05f-kube-api-access-p4lp7\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.658410 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " 
pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.658433 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-config-data\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.659514 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-run-httpd\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.659914 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-log-httpd\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.663384 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.663716 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-config-data\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.664685 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.678295 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-scripts\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.685278 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4lp7\" (UniqueName: \"kubernetes.io/projected/ff693cd6-6951-4953-80e4-971c4386f05f-kube-api-access-p4lp7\") pod \"ceilometer-0\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") " pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.752716 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.772611 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223a79c7-9460-4b16-a65d-90e2c0751dfa" path="/var/lib/kubelet/pods/223a79c7-9460-4b16-a65d-90e2c0751dfa/volumes" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.773503 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d74f1e6-2229-4ff7-8c80-b12d09285da4" path="/var/lib/kubelet/pods/9d74f1e6-2229-4ff7-8c80-b12d09285da4/volumes" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.817670 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5ffd847cb9-kf6tb" Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.905131 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fd46fb64b-j8mm8"] Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.909111 4980 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/neutron-7fd46fb64b-j8mm8" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-httpd" containerID="cri-o://ebd13c614b28d044d8712f567fe7b9441d275aea157ae3f8244bbfd9e0ca64b5" gracePeriod=30 Jan 07 03:51:35 crc kubenswrapper[4980]: I0107 03:51:35.908920 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7fd46fb64b-j8mm8" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-api" containerID="cri-o://ab5fcbb72fb4bcd5c79c1c11194a2d5ce3c022d9cb9b6b93f3ce22715fa6a6eb" gracePeriod=30 Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.123880 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6a8c45a-6172-461f-b674-784045e96557","Type":"ContainerDied","Data":"b193028c29162c271c1342655620054858e6c84aa14dab045c13526ed8154f77"} Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.124269 4980 scope.go:117] "RemoveContainer" containerID="96b8044404c04775a7752e59f0de2978470235d65cd7f6d9cbe067c7ad516553" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.127788 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.137919 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d3da149-0a31-4679-a580-69cc3d54ac1e","Type":"ContainerStarted","Data":"7aaacb682e147dcdc05f311aca2c9bf7c6ef64252da829e095e1e1fa139e7ece"} Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.145830 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76477f569b-rknll" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.146484 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76477f569b-rknll" event={"ID":"1311b795-d460-4939-8ba0-73e023eb9940","Type":"ContainerDied","Data":"e1b67aa4cabe08f066f474729047395a246446dd5140e81462594c0760c28896"} Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.169254 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.5513756579999995 podStartE2EDuration="7.169233856s" podCreationTimestamp="2026-01-07 03:51:29 +0000 UTC" firstStartedPulling="2026-01-07 03:51:31.922674151 +0000 UTC m=+1138.488368886" lastFinishedPulling="2026-01-07 03:51:33.540532349 +0000 UTC m=+1140.106227084" observedRunningTime="2026-01-07 03:51:36.158606415 +0000 UTC m=+1142.724301150" watchObservedRunningTime="2026-01-07 03:51:36.169233856 +0000 UTC m=+1142.734928591" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.204220 4980 scope.go:117] "RemoveContainer" containerID="e9fee459e41629d1b8c14689dae010612ca9a7d04e0185ee4d9036d4542311cc" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.235669 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.241130 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.258170 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.259671 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.266911 4980 scope.go:117] "RemoveContainer" containerID="820afbd26c93cfd13e5a0ceaa61e7b597b55d586b7c8edde6a0aaca5d6833add" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.267420 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.267621 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.267730 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.268261 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76477f569b-rknll"] Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.288983 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76477f569b-rknll"] Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.295619 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.316721 4980 scope.go:117] "RemoveContainer" containerID="0e86a2e894b7bfba2df7a16643e45d451153e319d788ef70232eafdbe5784175" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.318776 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377138 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377211 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-config-data\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377272 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a441a7ef-1973-4f21-8ec1-834904f5bcf7-logs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377301 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms5fn\" (UniqueName: \"kubernetes.io/projected/a441a7ef-1973-4f21-8ec1-834904f5bcf7-kube-api-access-ms5fn\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377359 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a441a7ef-1973-4f21-8ec1-834904f5bcf7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377384 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-config-data-custom\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377464 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377525 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-scripts\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.377540 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.479704 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-scripts\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.481407 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.481517 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 
03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.481601 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-config-data\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.481694 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a441a7ef-1973-4f21-8ec1-834904f5bcf7-logs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.481758 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms5fn\" (UniqueName: \"kubernetes.io/projected/a441a7ef-1973-4f21-8ec1-834904f5bcf7-kube-api-access-ms5fn\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.481828 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a441a7ef-1973-4f21-8ec1-834904f5bcf7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.481882 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-config-data-custom\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.482238 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.482512 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a441a7ef-1973-4f21-8ec1-834904f5bcf7-logs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.485658 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a441a7ef-1973-4f21-8ec1-834904f5bcf7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.486819 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.491038 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-scripts\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.491162 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-config-data-custom\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.491432 4980 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.493353 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-config-data\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.505959 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a441a7ef-1973-4f21-8ec1-834904f5bcf7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.510503 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms5fn\" (UniqueName: \"kubernetes.io/projected/a441a7ef-1973-4f21-8ec1-834904f5bcf7-kube-api-access-ms5fn\") pod \"cinder-api-0\" (UID: \"a441a7ef-1973-4f21-8ec1-834904f5bcf7\") " pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.543507 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.543605 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.543663 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.544593 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40fed48537fb4dd350c71735c8360a409552809cda596d45f1ede2d146d2801a"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.544669 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://40fed48537fb4dd350c71735c8360a409552809cda596d45f1ede2d146d2801a" gracePeriod=600 Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.597318 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 07 03:51:36 crc kubenswrapper[4980]: I0107 03:51:36.638420 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.092018 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 07 03:51:37 crc kubenswrapper[4980]: W0107 03:51:37.096905 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda441a7ef_1973_4f21_8ec1_834904f5bcf7.slice/crio-aeda08f73e4d0ed05cec7ebb8aeb608c5226b95b4ab4f818cae96ded259b42bb WatchSource:0}: Error finding container aeda08f73e4d0ed05cec7ebb8aeb608c5226b95b4ab4f818cae96ded259b42bb: Status 404 returned error can't find the container with id aeda08f73e4d0ed05cec7ebb8aeb608c5226b95b4ab4f818cae96ded259b42bb Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.162995 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a441a7ef-1973-4f21-8ec1-834904f5bcf7","Type":"ContainerStarted","Data":"aeda08f73e4d0ed05cec7ebb8aeb608c5226b95b4ab4f818cae96ded259b42bb"} Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.176413 4980 generic.go:334] "Generic (PLEG): container finished" podID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerID="ebd13c614b28d044d8712f567fe7b9441d275aea157ae3f8244bbfd9e0ca64b5" exitCode=0 Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.176490 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd46fb64b-j8mm8" event={"ID":"c323500f-74e7-47cd-b4fd-ab15b5fedfb5","Type":"ContainerDied","Data":"ebd13c614b28d044d8712f567fe7b9441d275aea157ae3f8244bbfd9e0ca64b5"} Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.178993 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerStarted","Data":"2d9e659b168460144e1f078a6e30097398fdaf9ea91b360a587bac877993103e"} Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.189591 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="40fed48537fb4dd350c71735c8360a409552809cda596d45f1ede2d146d2801a" exitCode=0 Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.189856 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"40fed48537fb4dd350c71735c8360a409552809cda596d45f1ede2d146d2801a"} Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.189994 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"9509c26a45d342c7045d51ecdcc5dc207e383364d36ad4c45d95a6527388485b"} Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.190014 4980 scope.go:117] "RemoveContainer" containerID="2aeadb84272b7976dfcd584a184be5b65ae16f36aa28ca68277c09134c73d7e7" Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.326410 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-565d4f6c4b-gj6mz" Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.387825 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547b7ddd64-7hclw"] Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.388058 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-547b7ddd64-7hclw" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon-log" containerID="cri-o://13a8448e218af0221ddaf7496f2b718343f57d58d232fbf8f3b7827312911de5" gracePeriod=30 Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 
03:51:37.388223 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-547b7ddd64-7hclw" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" containerID="cri-o://cd80c28616f3a7a48c0607d28c1a8354eb753e72520c1bfc5ee438419f2ad8e7" gracePeriod=30 Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.753149 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1311b795-d460-4939-8ba0-73e023eb9940" path="/var/lib/kubelet/pods/1311b795-d460-4939-8ba0-73e023eb9940/volumes" Jan 07 03:51:37 crc kubenswrapper[4980]: I0107 03:51:37.754136 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a8c45a-6172-461f-b674-784045e96557" path="/var/lib/kubelet/pods/c6a8c45a-6172-461f-b674-784045e96557/volumes" Jan 07 03:51:38 crc kubenswrapper[4980]: I0107 03:51:38.201987 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a441a7ef-1973-4f21-8ec1-834904f5bcf7","Type":"ContainerStarted","Data":"57b683e460c0e7431d1f82eabb6d6d9b217d56fe71665a5f1e219498faaf65ac"} Jan 07 03:51:38 crc kubenswrapper[4980]: I0107 03:51:38.206040 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerStarted","Data":"1c9809ff29802c243ea3d07804853713d994898880483c553897f939014c9c27"} Jan 07 03:51:38 crc kubenswrapper[4980]: I0107 03:51:38.206101 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerStarted","Data":"44e7e471ddd3b3cd348d87b4fb73275fa461fbadc1265ddf1e6cb385891c4984"} Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.235836 4980 generic.go:334] "Generic (PLEG): container finished" podID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerID="ab5fcbb72fb4bcd5c79c1c11194a2d5ce3c022d9cb9b6b93f3ce22715fa6a6eb" exitCode=0 Jan 07 03:51:39 crc kubenswrapper[4980]: 
I0107 03:51:39.235910 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd46fb64b-j8mm8" event={"ID":"c323500f-74e7-47cd-b4fd-ab15b5fedfb5","Type":"ContainerDied","Data":"ab5fcbb72fb4bcd5c79c1c11194a2d5ce3c022d9cb9b6b93f3ce22715fa6a6eb"} Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.239825 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerStarted","Data":"a3224392cbcfb343340b119966368e19797a5eedd518d899919e1e45cd1ccd1b"} Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.241838 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a441a7ef-1973-4f21-8ec1-834904f5bcf7","Type":"ContainerStarted","Data":"512d5a1ac8e40cc5dc3fbc8aaa86782fdd0c6fd1c6927122293a0a943bda954b"} Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.243618 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.281379 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.281362117 podStartE2EDuration="3.281362117s" podCreationTimestamp="2026-01-07 03:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:39.276565597 +0000 UTC m=+1145.842260332" watchObservedRunningTime="2026-01-07 03:51:39.281362117 +0000 UTC m=+1145.847056852" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.378271 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.405408 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.472718 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.527111 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9vkwn"] Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.527372 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" podUID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerName="dnsmasq-dns" containerID="cri-o://f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2" gracePeriod=10 Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.560990 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-ovndb-tls-certs\") pod \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.561039 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-config\") pod \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.561270 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-combined-ca-bundle\") pod \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " Jan 07 
03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.561334 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6c47\" (UniqueName: \"kubernetes.io/projected/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-kube-api-access-c6c47\") pod \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.561376 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-httpd-config\") pod \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\" (UID: \"c323500f-74e7-47cd-b4fd-ab15b5fedfb5\") " Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.568441 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-kube-api-access-c6c47" (OuterVolumeSpecName: "kube-api-access-c6c47") pod "c323500f-74e7-47cd-b4fd-ab15b5fedfb5" (UID: "c323500f-74e7-47cd-b4fd-ab15b5fedfb5"). InnerVolumeSpecName "kube-api-access-c6c47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.580302 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c323500f-74e7-47cd-b4fd-ab15b5fedfb5" (UID: "c323500f-74e7-47cd-b4fd-ab15b5fedfb5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.622934 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c323500f-74e7-47cd-b4fd-ab15b5fedfb5" (UID: "c323500f-74e7-47cd-b4fd-ab15b5fedfb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.664989 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-config" (OuterVolumeSpecName: "config") pod "c323500f-74e7-47cd-b4fd-ab15b5fedfb5" (UID: "c323500f-74e7-47cd-b4fd-ab15b5fedfb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.667182 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.667206 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.667216 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6c47\" (UniqueName: \"kubernetes.io/projected/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-kube-api-access-c6c47\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.667227 4980 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.676908 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c323500f-74e7-47cd-b4fd-ab15b5fedfb5" (UID: "c323500f-74e7-47cd-b4fd-ab15b5fedfb5"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.728523 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 07 03:51:39 crc kubenswrapper[4980]: I0107 03:51:39.769522 4980 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c323500f-74e7-47cd-b4fd-ab15b5fedfb5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.021405 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.177623 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-svc\") pod \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.177683 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db5j7\" (UniqueName: \"kubernetes.io/projected/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-kube-api-access-db5j7\") pod \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.177739 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-swift-storage-0\") pod \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.177853 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-config\") 
pod \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.177872 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-nb\") pod \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.177981 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb\") pod \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.188338 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-kube-api-access-db5j7" (OuterVolumeSpecName: "kube-api-access-db5j7") pod "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" (UID: "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b"). InnerVolumeSpecName "kube-api-access-db5j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.236616 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" (UID: "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.238440 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" (UID: "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.243224 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" (UID: "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:40 crc kubenswrapper[4980]: E0107 03:51:40.252814 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb podName:ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b nodeName:}" failed. No retries permitted until 2026-01-07 03:51:40.752761645 +0000 UTC m=+1147.318456580 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb") pod "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" (UID: "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b") : error deleting /var/lib/kubelet/pods/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b/volume-subpaths: remove /var/lib/kubelet/pods/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b/volume-subpaths: no such file or directory Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.252967 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-config" (OuterVolumeSpecName: "config") pod "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" (UID: "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.256450 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerID="f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2" exitCode=0 Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.256531 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.256545 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" event={"ID":"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b","Type":"ContainerDied","Data":"f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2"} Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.256622 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9vkwn" event={"ID":"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b","Type":"ContainerDied","Data":"1136700aa969fc31fdfeaee5518c50abb54e6b49f17a5c2526a5c9bf0aa6fb02"} Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.256641 4980 scope.go:117] "RemoveContainer" containerID="f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.259257 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd46fb64b-j8mm8" event={"ID":"c323500f-74e7-47cd-b4fd-ab15b5fedfb5","Type":"ContainerDied","Data":"78f09e661230db9ab52a61557632ed4a9e62ccca5a7089036a8d86482dad935c"} Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.259335 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fd46fb64b-j8mm8" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.280540 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.280585 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.280596 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.280606 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db5j7\" (UniqueName: \"kubernetes.io/projected/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-kube-api-access-db5j7\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.280615 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.307650 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.343794 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fd46fb64b-j8mm8"] Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.347809 4980 scope.go:117] "RemoveContainer" containerID="9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.350770 4980 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/neutron-7fd46fb64b-j8mm8"] Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.370179 4980 scope.go:117] "RemoveContainer" containerID="f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2" Jan 07 03:51:40 crc kubenswrapper[4980]: E0107 03:51:40.371545 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2\": container with ID starting with f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2 not found: ID does not exist" containerID="f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.371586 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2"} err="failed to get container status \"f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2\": rpc error: code = NotFound desc = could not find container \"f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2\": container with ID starting with f2405c9120641f64f847bd724d469d4e8bb14fdd74f45bc0f05ef7a6ffa731a2 not found: ID does not exist" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.371608 4980 scope.go:117] "RemoveContainer" containerID="9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3" Jan 07 03:51:40 crc kubenswrapper[4980]: E0107 03:51:40.376239 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3\": container with ID starting with 9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3 not found: ID does not exist" containerID="9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 
03:51:40.376265 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3"} err="failed to get container status \"9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3\": rpc error: code = NotFound desc = could not find container \"9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3\": container with ID starting with 9c25a55485b8782ca9db2b3b34ed480c0b8b71457dfed49347997f1804899fb3 not found: ID does not exist" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.376279 4980 scope.go:117] "RemoveContainer" containerID="ebd13c614b28d044d8712f567fe7b9441d275aea157ae3f8244bbfd9e0ca64b5" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.406062 4980 scope.go:117] "RemoveContainer" containerID="ab5fcbb72fb4bcd5c79c1c11194a2d5ce3c022d9cb9b6b93f3ce22715fa6a6eb" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.791037 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb\") pod \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\" (UID: \"ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b\") " Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.791587 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" (UID: "ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.893638 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.894508 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9vkwn"] Jan 07 03:51:40 crc kubenswrapper[4980]: I0107 03:51:40.902291 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9vkwn"] Jan 07 03:51:41 crc kubenswrapper[4980]: I0107 03:51:41.279413 4980 generic.go:334] "Generic (PLEG): container finished" podID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerID="cd80c28616f3a7a48c0607d28c1a8354eb753e72520c1bfc5ee438419f2ad8e7" exitCode=0 Jan 07 03:51:41 crc kubenswrapper[4980]: I0107 03:51:41.279499 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547b7ddd64-7hclw" event={"ID":"a806c806-4d43-4a04-aefa-0544f2a5175f","Type":"ContainerDied","Data":"cd80c28616f3a7a48c0607d28c1a8354eb753e72520c1bfc5ee438419f2ad8e7"} Jan 07 03:51:41 crc kubenswrapper[4980]: I0107 03:51:41.284455 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="cinder-scheduler" containerID="cri-o://f20caa55b58916282c09419f94d749f4611a29ec253caf31ed69882c63402372" gracePeriod=30 Jan 07 03:51:41 crc kubenswrapper[4980]: I0107 03:51:41.286014 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="probe" containerID="cri-o://7aaacb682e147dcdc05f311aca2c9bf7c6ef64252da829e095e1e1fa139e7ece" gracePeriod=30 Jan 07 03:51:41 crc kubenswrapper[4980]: I0107 03:51:41.746897 4980 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" path="/var/lib/kubelet/pods/c323500f-74e7-47cd-b4fd-ab15b5fedfb5/volumes" Jan 07 03:51:41 crc kubenswrapper[4980]: I0107 03:51:41.747630 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" path="/var/lib/kubelet/pods/ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b/volumes" Jan 07 03:51:42 crc kubenswrapper[4980]: I0107 03:51:42.295546 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerStarted","Data":"7bdbcf8540e25b8985fdd49f156a4c1e173cd520aeb1d87f1cff69e54138638c"} Jan 07 03:51:42 crc kubenswrapper[4980]: I0107 03:51:42.296040 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 07 03:51:42 crc kubenswrapper[4980]: I0107 03:51:42.323865 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.44884091 podStartE2EDuration="7.323844224s" podCreationTimestamp="2026-01-07 03:51:35 +0000 UTC" firstStartedPulling="2026-01-07 03:51:36.349967725 +0000 UTC m=+1142.915662460" lastFinishedPulling="2026-01-07 03:51:41.224970999 +0000 UTC m=+1147.790665774" observedRunningTime="2026-01-07 03:51:42.318069484 +0000 UTC m=+1148.883764219" watchObservedRunningTime="2026-01-07 03:51:42.323844224 +0000 UTC m=+1148.889538969" Jan 07 03:51:42 crc kubenswrapper[4980]: I0107 03:51:42.411200 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547b7ddd64-7hclw" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 07 03:51:42 crc kubenswrapper[4980]: E0107 03:51:42.799601 4980 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d74f1e6_2229_4ff7_8c80_b12d09285da4.slice/crio-980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851\": RecentStats: unable to find data in memory cache]" Jan 07 03:51:43 crc kubenswrapper[4980]: I0107 03:51:43.306014 4980 generic.go:334] "Generic (PLEG): container finished" podID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerID="7aaacb682e147dcdc05f311aca2c9bf7c6ef64252da829e095e1e1fa139e7ece" exitCode=0 Jan 07 03:51:43 crc kubenswrapper[4980]: I0107 03:51:43.306220 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d3da149-0a31-4679-a580-69cc3d54ac1e","Type":"ContainerDied","Data":"7aaacb682e147dcdc05f311aca2c9bf7c6ef64252da829e095e1e1fa139e7ece"} Jan 07 03:51:46 crc kubenswrapper[4980]: I0107 03:51:46.344656 4980 generic.go:334] "Generic (PLEG): container finished" podID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerID="f20caa55b58916282c09419f94d749f4611a29ec253caf31ed69882c63402372" exitCode=0 Jan 07 03:51:46 crc kubenswrapper[4980]: I0107 03:51:46.344749 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d3da149-0a31-4679-a580-69cc3d54ac1e","Type":"ContainerDied","Data":"f20caa55b58916282c09419f94d749f4611a29ec253caf31ed69882c63402372"} Jan 07 03:51:46 crc kubenswrapper[4980]: I0107 03:51:46.862469 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.035343 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3da149-0a31-4679-a580-69cc3d54ac1e-etc-machine-id\") pod \"4d3da149-0a31-4679-a580-69cc3d54ac1e\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.035757 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data\") pod \"4d3da149-0a31-4679-a580-69cc3d54ac1e\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.035538 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d3da149-0a31-4679-a580-69cc3d54ac1e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4d3da149-0a31-4679-a580-69cc3d54ac1e" (UID: "4d3da149-0a31-4679-a580-69cc3d54ac1e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.035845 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f22g\" (UniqueName: \"kubernetes.io/projected/4d3da149-0a31-4679-a580-69cc3d54ac1e-kube-api-access-8f22g\") pod \"4d3da149-0a31-4679-a580-69cc3d54ac1e\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.035977 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data-custom\") pod \"4d3da149-0a31-4679-a580-69cc3d54ac1e\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.036060 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-combined-ca-bundle\") pod \"4d3da149-0a31-4679-a580-69cc3d54ac1e\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.036099 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-scripts\") pod \"4d3da149-0a31-4679-a580-69cc3d54ac1e\" (UID: \"4d3da149-0a31-4679-a580-69cc3d54ac1e\") " Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.036428 4980 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d3da149-0a31-4679-a580-69cc3d54ac1e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.042804 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-scripts" (OuterVolumeSpecName: "scripts") pod 
"4d3da149-0a31-4679-a580-69cc3d54ac1e" (UID: "4d3da149-0a31-4679-a580-69cc3d54ac1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.042878 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4d3da149-0a31-4679-a580-69cc3d54ac1e" (UID: "4d3da149-0a31-4679-a580-69cc3d54ac1e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.044696 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3da149-0a31-4679-a580-69cc3d54ac1e-kube-api-access-8f22g" (OuterVolumeSpecName: "kube-api-access-8f22g") pod "4d3da149-0a31-4679-a580-69cc3d54ac1e" (UID: "4d3da149-0a31-4679-a580-69cc3d54ac1e"). InnerVolumeSpecName "kube-api-access-8f22g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.086679 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d3da149-0a31-4679-a580-69cc3d54ac1e" (UID: "4d3da149-0a31-4679-a580-69cc3d54ac1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.137992 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.138031 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f22g\" (UniqueName: \"kubernetes.io/projected/4d3da149-0a31-4679-a580-69cc3d54ac1e-kube-api-access-8f22g\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.138046 4980 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.138057 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.159545 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data" (OuterVolumeSpecName: "config-data") pod "4d3da149-0a31-4679-a580-69cc3d54ac1e" (UID: "4d3da149-0a31-4679-a580-69cc3d54ac1e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.239867 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3da149-0a31-4679-a580-69cc3d54ac1e-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.360021 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d3da149-0a31-4679-a580-69cc3d54ac1e","Type":"ContainerDied","Data":"c3901487edf79f978c9d4bf18af903e89f937bf0effc41aa75bf2624cd202473"} Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.361023 4980 scope.go:117] "RemoveContainer" containerID="7aaacb682e147dcdc05f311aca2c9bf7c6ef64252da829e095e1e1fa139e7ece" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.361113 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.391516 4980 scope.go:117] "RemoveContainer" containerID="f20caa55b58916282c09419f94d749f4611a29ec253caf31ed69882c63402372" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.402932 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.418616 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.429660 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:47 crc kubenswrapper[4980]: E0107 03:51:47.430123 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-api" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430145 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-api" 
Jan 07 03:51:47 crc kubenswrapper[4980]: E0107 03:51:47.430159 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerName="init" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430166 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerName="init" Jan 07 03:51:47 crc kubenswrapper[4980]: E0107 03:51:47.430178 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-httpd" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430184 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-httpd" Jan 07 03:51:47 crc kubenswrapper[4980]: E0107 03:51:47.430211 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="probe" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430217 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="probe" Jan 07 03:51:47 crc kubenswrapper[4980]: E0107 03:51:47.430236 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerName="dnsmasq-dns" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430243 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerName="dnsmasq-dns" Jan 07 03:51:47 crc kubenswrapper[4980]: E0107 03:51:47.430253 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="cinder-scheduler" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430259 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="cinder-scheduler" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430432 
4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7b650c-cb7c-4bcc-a1c3-c3976e0c057b" containerName="dnsmasq-dns" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430449 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="cinder-scheduler" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430460 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" containerName="probe" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430468 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-api" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.430478 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c323500f-74e7-47cd-b4fd-ab15b5fedfb5" containerName="neutron-httpd" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.431654 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.433753 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.438605 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.544814 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.545109 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e880ebfc-1037-4101-b489-84fc6660d45f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.545202 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mll52\" (UniqueName: \"kubernetes.io/projected/e880ebfc-1037-4101-b489-84fc6660d45f-kube-api-access-mll52\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.545307 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 
03:51:47.545397 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.545507 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.646918 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.647323 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.647426 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e880ebfc-1037-4101-b489-84fc6660d45f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.647508 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mll52\" (UniqueName: \"kubernetes.io/projected/e880ebfc-1037-4101-b489-84fc6660d45f-kube-api-access-mll52\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.647609 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.647713 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.648355 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e880ebfc-1037-4101-b489-84fc6660d45f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.652070 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.654413 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " 
pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.656156 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.658896 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e880ebfc-1037-4101-b489-84fc6660d45f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.667972 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mll52\" (UniqueName: \"kubernetes.io/projected/e880ebfc-1037-4101-b489-84fc6660d45f-kube-api-access-mll52\") pod \"cinder-scheduler-0\" (UID: \"e880ebfc-1037-4101-b489-84fc6660d45f\") " pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.747382 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3da149-0a31-4679-a580-69cc3d54ac1e" path="/var/lib/kubelet/pods/4d3da149-0a31-4679-a580-69cc3d54ac1e/volumes" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.748779 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 07 03:51:47 crc kubenswrapper[4980]: I0107 03:51:47.800716 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-85979fc5c6-rh7l6" Jan 07 03:51:48 crc kubenswrapper[4980]: I0107 03:51:48.297084 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 07 03:51:48 crc kubenswrapper[4980]: I0107 03:51:48.369967 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e880ebfc-1037-4101-b489-84fc6660d45f","Type":"ContainerStarted","Data":"432e56ef4ab8819b63aae4d0b4e4493ad4b08d203c5c69bf2e75878161d9dd15"} Jan 07 03:51:48 crc kubenswrapper[4980]: I0107 03:51:48.572089 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 07 03:51:49 crc kubenswrapper[4980]: I0107 03:51:49.382958 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e880ebfc-1037-4101-b489-84fc6660d45f","Type":"ContainerStarted","Data":"a718337b458fad4d63a4db8efc6b66f47f6f4fc171c3b07e58c718a1167d0628"} Jan 07 03:51:50 crc kubenswrapper[4980]: I0107 03:51:50.393884 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e880ebfc-1037-4101-b489-84fc6660d45f","Type":"ContainerStarted","Data":"3dd86518b2197106273f499d3ebdac11fe63d4881c9433cd95f4d92dfcac6c7b"} Jan 07 03:51:50 crc kubenswrapper[4980]: I0107 03:51:50.420908 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.420883658 podStartE2EDuration="3.420883658s" podCreationTimestamp="2026-01-07 03:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:50.413014433 +0000 UTC m=+1156.978709158" watchObservedRunningTime="2026-01-07 
03:51:50.420883658 +0000 UTC m=+1156.986578403" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.647616 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85968999bf-kv4dj"] Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.650058 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.656142 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.656424 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.656708 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.659489 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85968999bf-kv4dj"] Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726206 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df0744c9-9130-4abb-be49-156d72cc1a20-run-httpd\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726247 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df0744c9-9130-4abb-be49-156d72cc1a20-etc-swift\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726276 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df0744c9-9130-4abb-be49-156d72cc1a20-log-httpd\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726431 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-config-data\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726667 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-internal-tls-certs\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726795 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-combined-ca-bundle\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726961 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljc8w\" (UniqueName: \"kubernetes.io/projected/df0744c9-9130-4abb-be49-156d72cc1a20-kube-api-access-ljc8w\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.726997 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-public-tls-certs\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828646 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-combined-ca-bundle\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828741 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljc8w\" (UniqueName: \"kubernetes.io/projected/df0744c9-9130-4abb-be49-156d72cc1a20-kube-api-access-ljc8w\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828760 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-public-tls-certs\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828820 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df0744c9-9130-4abb-be49-156d72cc1a20-run-httpd\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828851 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df0744c9-9130-4abb-be49-156d72cc1a20-etc-swift\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828881 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df0744c9-9130-4abb-be49-156d72cc1a20-log-httpd\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828915 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-config-data\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.828968 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-internal-tls-certs\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.830272 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df0744c9-9130-4abb-be49-156d72cc1a20-run-httpd\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.831339 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/df0744c9-9130-4abb-be49-156d72cc1a20-log-httpd\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.837375 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-internal-tls-certs\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.838182 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-public-tls-certs\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.840086 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-combined-ca-bundle\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.849312 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0744c9-9130-4abb-be49-156d72cc1a20-config-data\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.851626 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df0744c9-9130-4abb-be49-156d72cc1a20-etc-swift\") pod 
\"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.864563 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljc8w\" (UniqueName: \"kubernetes.io/projected/df0744c9-9130-4abb-be49-156d72cc1a20-kube-api-access-ljc8w\") pod \"swift-proxy-85968999bf-kv4dj\" (UID: \"df0744c9-9130-4abb-be49-156d72cc1a20\") " pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.938654 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.941763 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f54697b86-x8z4v" Jan 07 03:51:51 crc kubenswrapper[4980]: I0107 03:51:51.969746 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85968999bf-kv4dj" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.411608 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547b7ddd64-7hclw" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.558165 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.559264 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.563291 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.563500 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.563712 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lc52g" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.572632 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.641774 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85968999bf-kv4dj"] Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.649426 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bvc\" (UniqueName: \"kubernetes.io/projected/ec7c2df8-5955-4063-831a-7d1371e5e983-kube-api-access-t2bvc\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.649472 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7c2df8-5955-4063-831a-7d1371e5e983-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.649501 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec7c2df8-5955-4063-831a-7d1371e5e983-openstack-config\") pod \"openstackclient\" (UID: 
\"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.649547 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec7c2df8-5955-4063-831a-7d1371e5e983-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.749254 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.750798 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec7c2df8-5955-4063-831a-7d1371e5e983-openstack-config\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.750871 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec7c2df8-5955-4063-831a-7d1371e5e983-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.750967 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2bvc\" (UniqueName: \"kubernetes.io/projected/ec7c2df8-5955-4063-831a-7d1371e5e983-kube-api-access-t2bvc\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.750997 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec7c2df8-5955-4063-831a-7d1371e5e983-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.752261 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec7c2df8-5955-4063-831a-7d1371e5e983-openstack-config\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.754566 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7c2df8-5955-4063-831a-7d1371e5e983-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.760071 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec7c2df8-5955-4063-831a-7d1371e5e983-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.767937 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2bvc\" (UniqueName: \"kubernetes.io/projected/ec7c2df8-5955-4063-831a-7d1371e5e983-kube-api-access-t2bvc\") pod \"openstackclient\" (UID: \"ec7c2df8-5955-4063-831a-7d1371e5e983\") " pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.878566 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.975801 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.976187 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-central-agent" containerID="cri-o://44e7e471ddd3b3cd348d87b4fb73275fa461fbadc1265ddf1e6cb385891c4984" gracePeriod=30 Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.977199 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="proxy-httpd" containerID="cri-o://7bdbcf8540e25b8985fdd49f156a4c1e173cd520aeb1d87f1cff69e54138638c" gracePeriod=30 Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.977357 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-notification-agent" containerID="cri-o://1c9809ff29802c243ea3d07804853713d994898880483c553897f939014c9c27" gracePeriod=30 Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.977445 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="sg-core" containerID="cri-o://a3224392cbcfb343340b119966368e19797a5eedd518d899919e1e45cd1ccd1b" gracePeriod=30 Jan 07 03:51:52 crc kubenswrapper[4980]: I0107 03:51:52.991183 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.165:3000/\": EOF" Jan 07 03:51:53 crc kubenswrapper[4980]: E0107 03:51:53.135646 4980 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d74f1e6_2229_4ff7_8c80_b12d09285da4.slice/crio-980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851\": RecentStats: unable to find data in memory cache]" Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.428398 4980 generic.go:334] "Generic (PLEG): container finished" podID="ff693cd6-6951-4953-80e4-971c4386f05f" containerID="7bdbcf8540e25b8985fdd49f156a4c1e173cd520aeb1d87f1cff69e54138638c" exitCode=0 Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.428809 4980 generic.go:334] "Generic (PLEG): container finished" podID="ff693cd6-6951-4953-80e4-971c4386f05f" containerID="a3224392cbcfb343340b119966368e19797a5eedd518d899919e1e45cd1ccd1b" exitCode=2 Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.428821 4980 generic.go:334] "Generic (PLEG): container finished" podID="ff693cd6-6951-4953-80e4-971c4386f05f" containerID="44e7e471ddd3b3cd348d87b4fb73275fa461fbadc1265ddf1e6cb385891c4984" exitCode=0 Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.428486 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerDied","Data":"7bdbcf8540e25b8985fdd49f156a4c1e173cd520aeb1d87f1cff69e54138638c"} Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.428906 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerDied","Data":"a3224392cbcfb343340b119966368e19797a5eedd518d899919e1e45cd1ccd1b"} Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.428923 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerDied","Data":"44e7e471ddd3b3cd348d87b4fb73275fa461fbadc1265ddf1e6cb385891c4984"} Jan 07 03:51:53 crc 
kubenswrapper[4980]: I0107 03:51:53.431371 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85968999bf-kv4dj" event={"ID":"df0744c9-9130-4abb-be49-156d72cc1a20","Type":"ContainerStarted","Data":"8a939f75a0242e8d00317c92c2c4be6b08aacf68413b28c3b0a11140389b98cb"}
Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.431423 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85968999bf-kv4dj" event={"ID":"df0744c9-9130-4abb-be49-156d72cc1a20","Type":"ContainerStarted","Data":"684696797ff44b97d39351b30b95c8122f16354844e7a7af2a90e4d94bb7f4a8"}
Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.431438 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85968999bf-kv4dj" event={"ID":"df0744c9-9130-4abb-be49-156d72cc1a20","Type":"ContainerStarted","Data":"f17ddce8fdd0757184e420b181cbd2de6a62cf0cc836d1c3b539a372545ad625"}
Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.431629 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85968999bf-kv4dj"
Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.432011 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85968999bf-kv4dj"
Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.455150 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85968999bf-kv4dj" podStartSLOduration=2.455123419 podStartE2EDuration="2.455123419s" podCreationTimestamp="2026-01-07 03:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:51:53.451935439 +0000 UTC m=+1160.017630194" watchObservedRunningTime="2026-01-07 03:51:53.455123419 +0000 UTC m=+1160.020818164"
Jan 07 03:51:53 crc kubenswrapper[4980]: W0107 03:51:53.472629 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec7c2df8_5955_4063_831a_7d1371e5e983.slice/crio-db92fe25e26769cac68f0c2205580cc4b79590ee1748bbef8da89b59f413deb2 WatchSource:0}: Error finding container db92fe25e26769cac68f0c2205580cc4b79590ee1748bbef8da89b59f413deb2: Status 404 returned error can't find the container with id db92fe25e26769cac68f0c2205580cc4b79590ee1748bbef8da89b59f413deb2
Jan 07 03:51:53 crc kubenswrapper[4980]: I0107 03:51:53.478756 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.494808 4980 generic.go:334] "Generic (PLEG): container finished" podID="ff693cd6-6951-4953-80e4-971c4386f05f" containerID="1c9809ff29802c243ea3d07804853713d994898880483c553897f939014c9c27" exitCode=0
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.495134 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerDied","Data":"1c9809ff29802c243ea3d07804853713d994898880483c553897f939014c9c27"}
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.502100 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ec7c2df8-5955-4063-831a-7d1371e5e983","Type":"ContainerStarted","Data":"db92fe25e26769cac68f0c2205580cc4b79590ee1748bbef8da89b59f413deb2"}
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.532710 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.594655 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-combined-ca-bundle\") pod \"ff693cd6-6951-4953-80e4-971c4386f05f\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") "
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.595136 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-scripts\") pod \"ff693cd6-6951-4953-80e4-971c4386f05f\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") "
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.595199 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-log-httpd\") pod \"ff693cd6-6951-4953-80e4-971c4386f05f\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") "
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.595236 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-sg-core-conf-yaml\") pod \"ff693cd6-6951-4953-80e4-971c4386f05f\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") "
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.595280 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-config-data\") pod \"ff693cd6-6951-4953-80e4-971c4386f05f\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") "
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.595302 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4lp7\" (UniqueName: \"kubernetes.io/projected/ff693cd6-6951-4953-80e4-971c4386f05f-kube-api-access-p4lp7\") pod \"ff693cd6-6951-4953-80e4-971c4386f05f\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") "
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.595409 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-run-httpd\") pod \"ff693cd6-6951-4953-80e4-971c4386f05f\" (UID: \"ff693cd6-6951-4953-80e4-971c4386f05f\") "
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.597050 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff693cd6-6951-4953-80e4-971c4386f05f" (UID: "ff693cd6-6951-4953-80e4-971c4386f05f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.597654 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff693cd6-6951-4953-80e4-971c4386f05f" (UID: "ff693cd6-6951-4953-80e4-971c4386f05f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.621424 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff693cd6-6951-4953-80e4-971c4386f05f-kube-api-access-p4lp7" (OuterVolumeSpecName: "kube-api-access-p4lp7") pod "ff693cd6-6951-4953-80e4-971c4386f05f" (UID: "ff693cd6-6951-4953-80e4-971c4386f05f"). InnerVolumeSpecName "kube-api-access-p4lp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.623674 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-scripts" (OuterVolumeSpecName: "scripts") pod "ff693cd6-6951-4953-80e4-971c4386f05f" (UID: "ff693cd6-6951-4953-80e4-971c4386f05f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.629659 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff693cd6-6951-4953-80e4-971c4386f05f" (UID: "ff693cd6-6951-4953-80e4-971c4386f05f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.685260 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff693cd6-6951-4953-80e4-971c4386f05f" (UID: "ff693cd6-6951-4953-80e4-971c4386f05f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.700265 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.700355 4980 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.700371 4980 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.700386 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4lp7\" (UniqueName: \"kubernetes.io/projected/ff693cd6-6951-4953-80e4-971c4386f05f-kube-api-access-p4lp7\") on node \"crc\" DevicePath \"\""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.700395 4980 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff693cd6-6951-4953-80e4-971c4386f05f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.700405 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.716695 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-config-data" (OuterVolumeSpecName: "config-data") pod "ff693cd6-6951-4953-80e4-971c4386f05f" (UID: "ff693cd6-6951-4953-80e4-971c4386f05f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:51:54 crc kubenswrapper[4980]: I0107 03:51:54.802813 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff693cd6-6951-4953-80e4-971c4386f05f-config-data\") on node \"crc\" DevicePath \"\""
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.513693 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff693cd6-6951-4953-80e4-971c4386f05f","Type":"ContainerDied","Data":"2d9e659b168460144e1f078a6e30097398fdaf9ea91b360a587bac877993103e"}
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.513740 4980 scope.go:117] "RemoveContainer" containerID="7bdbcf8540e25b8985fdd49f156a4c1e173cd520aeb1d87f1cff69e54138638c"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.513866 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.553922 4980 scope.go:117] "RemoveContainer" containerID="a3224392cbcfb343340b119966368e19797a5eedd518d899919e1e45cd1ccd1b"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.557642 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.571319 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.583439 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:51:55 crc kubenswrapper[4980]: E0107 03:51:55.583802 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="proxy-httpd"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.583818 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="proxy-httpd"
Jan 07 03:51:55 crc kubenswrapper[4980]: E0107 03:51:55.583844 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-central-agent"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.583851 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-central-agent"
Jan 07 03:51:55 crc kubenswrapper[4980]: E0107 03:51:55.583863 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-notification-agent"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.583869 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-notification-agent"
Jan 07 03:51:55 crc kubenswrapper[4980]: E0107 03:51:55.583886 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="sg-core"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.583893 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="sg-core"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.584044 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="sg-core"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.584053 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-notification-agent"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.584066 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="ceilometer-central-agent"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.584073 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" containerName="proxy-httpd"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.586686 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.590994 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.591059 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.593293 4980 scope.go:117] "RemoveContainer" containerID="1c9809ff29802c243ea3d07804853713d994898880483c553897f939014c9c27"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.620453 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.638294 4980 scope.go:117] "RemoveContainer" containerID="44e7e471ddd3b3cd348d87b4fb73275fa461fbadc1265ddf1e6cb385891c4984"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.731857 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.732160 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-run-httpd\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.732310 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw42t\" (UniqueName: \"kubernetes.io/projected/c7aa2c32-8209-48d0-8757-34e5bf86dd75-kube-api-access-dw42t\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.732406 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-scripts\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.732509 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-config-data\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.732614 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-log-httpd\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.732817 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.751490 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff693cd6-6951-4953-80e4-971c4386f05f" path="/var/lib/kubelet/pods/ff693cd6-6951-4953-80e4-971c4386f05f/volumes"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.835156 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-run-httpd\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.835759 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-run-httpd\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.836937 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw42t\" (UniqueName: \"kubernetes.io/projected/c7aa2c32-8209-48d0-8757-34e5bf86dd75-kube-api-access-dw42t\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.837345 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-scripts\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.838456 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-config-data\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.838894 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-log-httpd\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.839198 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-log-httpd\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.839652 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.841602 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.845138 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-scripts\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.845767 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.846524 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.852079 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-config-data\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.855680 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw42t\" (UniqueName: \"kubernetes.io/projected/c7aa2c32-8209-48d0-8757-34e5bf86dd75-kube-api-access-dw42t\") pod \"ceilometer-0\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") " pod="openstack/ceilometer-0"
Jan 07 03:51:55 crc kubenswrapper[4980]: I0107 03:51:55.905377 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:51:56 crc kubenswrapper[4980]: I0107 03:51:56.427720 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:51:56 crc kubenswrapper[4980]: W0107 03:51:56.435300 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7aa2c32_8209_48d0_8757_34e5bf86dd75.slice/crio-baa433a6adb6e8fb9187dc74bf9f81a0e3956efee300ae5aa31c7bda70e073cc WatchSource:0}: Error finding container baa433a6adb6e8fb9187dc74bf9f81a0e3956efee300ae5aa31c7bda70e073cc: Status 404 returned error can't find the container with id baa433a6adb6e8fb9187dc74bf9f81a0e3956efee300ae5aa31c7bda70e073cc
Jan 07 03:51:56 crc kubenswrapper[4980]: I0107 03:51:56.529975 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerStarted","Data":"baa433a6adb6e8fb9187dc74bf9f81a0e3956efee300ae5aa31c7bda70e073cc"}
Jan 07 03:51:57 crc kubenswrapper[4980]: I0107 03:51:57.543369 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerStarted","Data":"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22"}
Jan 07 03:51:57 crc kubenswrapper[4980]: I0107 03:51:57.967261 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 07 03:51:58 crc kubenswrapper[4980]: I0107 03:51:58.563377 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerStarted","Data":"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993"}
Jan 07 03:51:58 crc kubenswrapper[4980]: I0107 03:51:58.563761 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerStarted","Data":"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a"}
Jan 07 03:52:01 crc kubenswrapper[4980]: I0107 03:52:01.983232 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85968999bf-kv4dj"
Jan 07 03:52:01 crc kubenswrapper[4980]: I0107 03:52:01.983644 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85968999bf-kv4dj"
Jan 07 03:52:02 crc kubenswrapper[4980]: I0107 03:52:02.410777 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547b7ddd64-7hclw" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Jan 07 03:52:02 crc kubenswrapper[4980]: I0107 03:52:02.411140 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-547b7ddd64-7hclw"
Jan 07 03:52:03 crc kubenswrapper[4980]: I0107 03:52:03.389605 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:03 crc kubenswrapper[4980]: E0107 03:52:03.414509 4980 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d74f1e6_2229_4ff7_8c80_b12d09285da4.slice/crio-980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851\": RecentStats: unable to find data in memory cache]"
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.621130 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ec7c2df8-5955-4063-831a-7d1371e5e983","Type":"ContainerStarted","Data":"bf462bbb19af0f818c526e89665ae5ff72ce821efeb5cb4a8d4b97fe797b75be"}
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.624565 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerStarted","Data":"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b"}
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.624760 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-central-agent" containerID="cri-o://d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" gracePeriod=30
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.625014 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.625042 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="sg-core" containerID="cri-o://5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" gracePeriod=30
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.625089 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-notification-agent" containerID="cri-o://ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" gracePeriod=30
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.625064 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="proxy-httpd" containerID="cri-o://9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" gracePeriod=30
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.649138 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.126217586 podStartE2EDuration="12.64911101s" podCreationTimestamp="2026-01-07 03:51:52 +0000 UTC" firstStartedPulling="2026-01-07 03:51:53.474341358 +0000 UTC m=+1160.040036093" lastFinishedPulling="2026-01-07 03:52:03.997234782 +0000 UTC m=+1170.562929517" observedRunningTime="2026-01-07 03:52:04.640080998 +0000 UTC m=+1171.205775733" watchObservedRunningTime="2026-01-07 03:52:04.64911101 +0000 UTC m=+1171.214805745"
Jan 07 03:52:04 crc kubenswrapper[4980]: I0107 03:52:04.684686 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.127634364 podStartE2EDuration="9.684660659s" podCreationTimestamp="2026-01-07 03:51:55 +0000 UTC" firstStartedPulling="2026-01-07 03:51:56.439502784 +0000 UTC m=+1163.005197519" lastFinishedPulling="2026-01-07 03:52:03.996529049 +0000 UTC m=+1170.562223814" observedRunningTime="2026-01-07 03:52:04.67540411 +0000 UTC m=+1171.241098875" watchObservedRunningTime="2026-01-07 03:52:04.684660659 +0000 UTC m=+1171.250355424"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.455808 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.543101 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-config-data\") pod \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") "
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.543231 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-combined-ca-bundle\") pod \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") "
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.543269 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-log-httpd\") pod \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") "
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.544125 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-scripts\") pod \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") "
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.544219 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw42t\" (UniqueName: \"kubernetes.io/projected/c7aa2c32-8209-48d0-8757-34e5bf86dd75-kube-api-access-dw42t\") pod \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") "
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.544271 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-sg-core-conf-yaml\") pod \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") "
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.544342 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-run-httpd\") pod \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\" (UID: \"c7aa2c32-8209-48d0-8757-34e5bf86dd75\") "
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.544951 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7aa2c32-8209-48d0-8757-34e5bf86dd75" (UID: "c7aa2c32-8209-48d0-8757-34e5bf86dd75"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.546305 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7aa2c32-8209-48d0-8757-34e5bf86dd75" (UID: "c7aa2c32-8209-48d0-8757-34e5bf86dd75"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.549944 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-scripts" (OuterVolumeSpecName: "scripts") pod "c7aa2c32-8209-48d0-8757-34e5bf86dd75" (UID: "c7aa2c32-8209-48d0-8757-34e5bf86dd75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.552158 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7aa2c32-8209-48d0-8757-34e5bf86dd75-kube-api-access-dw42t" (OuterVolumeSpecName: "kube-api-access-dw42t") pod "c7aa2c32-8209-48d0-8757-34e5bf86dd75" (UID: "c7aa2c32-8209-48d0-8757-34e5bf86dd75"). InnerVolumeSpecName "kube-api-access-dw42t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.617375 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zcdps"]
Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.617866 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="sg-core"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.617890 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="sg-core"
Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.617914 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="proxy-httpd"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.617952 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="proxy-httpd"
Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.617981 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-central-agent"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.617990 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-central-agent"
Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.618008 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-notification-agent"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.618019 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-notification-agent"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.618236 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-central-agent"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.618271 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="ceilometer-notification-agent"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.618287 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="sg-core"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.618299 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerName="proxy-httpd"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.619123 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zcdps"
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.625571 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7aa2c32-8209-48d0-8757-34e5bf86dd75" (UID: "c7aa2c32-8209-48d0-8757-34e5bf86dd75"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.638541 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zcdps"]
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642512 4980 generic.go:334] "Generic (PLEG): container finished" podID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerID="9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" exitCode=0
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642534 4980 generic.go:334] "Generic (PLEG): container finished" podID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerID="5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" exitCode=2
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642575 4980 generic.go:334] "Generic (PLEG): container finished" podID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerID="ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" exitCode=0
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642575 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerDied","Data":"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b"}
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642601 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerDied","Data":"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993"}
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642584 4980 generic.go:334] "Generic (PLEG): container finished" podID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" containerID="d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" exitCode=0
Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642622 4980 scope.go:117] "RemoveContainer"
containerID="9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642669 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642612 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerDied","Data":"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a"} Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642717 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerDied","Data":"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22"} Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.642727 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa2c32-8209-48d0-8757-34e5bf86dd75","Type":"ContainerDied","Data":"baa433a6adb6e8fb9187dc74bf9f81a0e3956efee300ae5aa31c7bda70e073cc"} Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.646144 4980 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.646174 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.646187 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw42t\" (UniqueName: \"kubernetes.io/projected/c7aa2c32-8209-48d0-8757-34e5bf86dd75-kube-api-access-dw42t\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.646202 4980 
reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.646214 4980 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa2c32-8209-48d0-8757-34e5bf86dd75-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.680868 4980 scope.go:117] "RemoveContainer" containerID="5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.700197 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jlvxp"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.707799 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.710018 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7aa2c32-8209-48d0-8757-34e5bf86dd75" (UID: "c7aa2c32-8209-48d0-8757-34e5bf86dd75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.718763 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-config-data" (OuterVolumeSpecName: "config-data") pod "c7aa2c32-8209-48d0-8757-34e5bf86dd75" (UID: "c7aa2c32-8209-48d0-8757-34e5bf86dd75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.720859 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jlvxp"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.728046 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-771b-account-create-update-gnf2t"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.729586 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.729893 4980 scope.go:117] "RemoveContainer" containerID="ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.732171 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.747688 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6626e80c-7ae0-45cc-a3f3-77d55d176b86-operator-scripts\") pod \"nova-api-db-create-zcdps\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") " pod="openstack/nova-api-db-create-zcdps" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.747777 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9b8k\" (UniqueName: \"kubernetes.io/projected/6626e80c-7ae0-45cc-a3f3-77d55d176b86-kube-api-access-p9b8k\") pod \"nova-api-db-create-zcdps\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") " pod="openstack/nova-api-db-create-zcdps" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.747907 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.747919 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa2c32-8209-48d0-8757-34e5bf86dd75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.752475 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-771b-account-create-update-gnf2t"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.758267 4980 scope.go:117] "RemoveContainer" containerID="d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.793023 4980 scope.go:117] "RemoveContainer" containerID="9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.796009 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": container with ID starting with 9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b not found: ID does not exist" containerID="9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.796073 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b"} err="failed to get container status \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": rpc error: code = NotFound desc = could not find container \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": container with ID starting with 9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.796113 4980 scope.go:117] "RemoveContainer" 
containerID="5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.796840 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": container with ID starting with 5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993 not found: ID does not exist" containerID="5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.796889 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993"} err="failed to get container status \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": rpc error: code = NotFound desc = could not find container \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": container with ID starting with 5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.796918 4980 scope.go:117] "RemoveContainer" containerID="ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.803653 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": container with ID starting with ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a not found: ID does not exist" containerID="ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.803707 4980 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a"} err="failed to get container status \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": rpc error: code = NotFound desc = could not find container \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": container with ID starting with ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.803739 4980 scope.go:117] "RemoveContainer" containerID="d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" Jan 07 03:52:05 crc kubenswrapper[4980]: E0107 03:52:05.804099 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": container with ID starting with d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22 not found: ID does not exist" containerID="d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.804140 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22"} err="failed to get container status \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": rpc error: code = NotFound desc = could not find container \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": container with ID starting with d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.804159 4980 scope.go:117] "RemoveContainer" containerID="9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.815621 4980 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b"} err="failed to get container status \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": rpc error: code = NotFound desc = could not find container \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": container with ID starting with 9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.815658 4980 scope.go:117] "RemoveContainer" containerID="5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.816002 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993"} err="failed to get container status \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": rpc error: code = NotFound desc = could not find container \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": container with ID starting with 5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.816020 4980 scope.go:117] "RemoveContainer" containerID="ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.816332 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8f6ft"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.817816 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.818575 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a"} err="failed to get container status \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": rpc error: code = NotFound desc = could not find container \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": container with ID starting with ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.818622 4980 scope.go:117] "RemoveContainer" containerID="d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.822613 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22"} err="failed to get container status \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": rpc error: code = NotFound desc = could not find container \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": container with ID starting with d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.822646 4980 scope.go:117] "RemoveContainer" containerID="9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.822983 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b"} err="failed to get container status \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": rpc error: code = NotFound desc = could not find 
container \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": container with ID starting with 9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823005 4980 scope.go:117] "RemoveContainer" containerID="5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823349 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993"} err="failed to get container status \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": rpc error: code = NotFound desc = could not find container \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": container with ID starting with 5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823368 4980 scope.go:117] "RemoveContainer" containerID="ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823594 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a"} err="failed to get container status \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": rpc error: code = NotFound desc = could not find container \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": container with ID starting with ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823612 4980 scope.go:117] "RemoveContainer" containerID="d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823798 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22"} err="failed to get container status \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": rpc error: code = NotFound desc = could not find container \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": container with ID starting with d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823816 4980 scope.go:117] "RemoveContainer" containerID="9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823977 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b"} err="failed to get container status \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": rpc error: code = NotFound desc = could not find container \"9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b\": container with ID starting with 9660b4c7073b2edabdcf967b2c5507e676ab90683cd703f50c1b134ab488d04b not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.823994 4980 scope.go:117] "RemoveContainer" containerID="5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.824145 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993"} err="failed to get container status \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": rpc error: code = NotFound desc = could not find container \"5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993\": container with ID starting with 
5763aaa8b952b3f16c35c7923b2cca5dfc6269f4aeabfd7b694ba2250211d993 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.824162 4980 scope.go:117] "RemoveContainer" containerID="ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.824336 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a"} err="failed to get container status \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": rpc error: code = NotFound desc = could not find container \"ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a\": container with ID starting with ae337e755076411d9a3c8a03a967412c1b1ed0b96edf17093e00379b1def184a not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.824353 4980 scope.go:117] "RemoveContainer" containerID="d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.824845 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22"} err="failed to get container status \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": rpc error: code = NotFound desc = could not find container \"d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22\": container with ID starting with d1abd1ad9769677f31d82fe54c21ee2569b1823ab44579d271038d8d44e41f22 not found: ID does not exist" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.831613 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8f6ft"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.849529 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9b8k\" (UniqueName: 
\"kubernetes.io/projected/6626e80c-7ae0-45cc-a3f3-77d55d176b86-kube-api-access-p9b8k\") pod \"nova-api-db-create-zcdps\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") " pod="openstack/nova-api-db-create-zcdps" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.849734 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e7d535-d7b0-4742-9f6a-f45e56965313-operator-scripts\") pod \"nova-cell0-db-create-jlvxp\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") " pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.849795 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6626e80c-7ae0-45cc-a3f3-77d55d176b86-operator-scripts\") pod \"nova-api-db-create-zcdps\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") " pod="openstack/nova-api-db-create-zcdps" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.849838 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16c025a9-1e74-4b76-aa08-b409e8ebfeda-operator-scripts\") pod \"nova-api-771b-account-create-update-gnf2t\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") " pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.849864 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpttr\" (UniqueName: \"kubernetes.io/projected/f8e7d535-d7b0-4742-9f6a-f45e56965313-kube-api-access-fpttr\") pod \"nova-cell0-db-create-jlvxp\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") " pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.849885 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvx7n\" (UniqueName: \"kubernetes.io/projected/16c025a9-1e74-4b76-aa08-b409e8ebfeda-kube-api-access-nvx7n\") pod \"nova-api-771b-account-create-update-gnf2t\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") " pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.850582 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6626e80c-7ae0-45cc-a3f3-77d55d176b86-operator-scripts\") pod \"nova-api-db-create-zcdps\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") " pod="openstack/nova-api-db-create-zcdps" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.874195 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9b8k\" (UniqueName: \"kubernetes.io/projected/6626e80c-7ae0-45cc-a3f3-77d55d176b86-kube-api-access-p9b8k\") pod \"nova-api-db-create-zcdps\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") " pod="openstack/nova-api-db-create-zcdps" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.897346 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f4c7-account-create-update-vp8hz"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.898594 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.900636 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.938508 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f4c7-account-create-update-vp8hz"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.938965 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zcdps" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952013 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjg7\" (UniqueName: \"kubernetes.io/projected/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-kube-api-access-phjg7\") pod \"nova-cell1-db-create-8f6ft\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") " pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952242 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e7d535-d7b0-4742-9f6a-f45e56965313-operator-scripts\") pod \"nova-cell0-db-create-jlvxp\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") " pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952306 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16c025a9-1e74-4b76-aa08-b409e8ebfeda-operator-scripts\") pod \"nova-api-771b-account-create-update-gnf2t\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") " pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952325 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-operator-scripts\") pod \"nova-cell1-db-create-8f6ft\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") " pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952349 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpttr\" (UniqueName: \"kubernetes.io/projected/f8e7d535-d7b0-4742-9f6a-f45e56965313-kube-api-access-fpttr\") pod 
\"nova-cell0-db-create-jlvxp\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") " pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952371 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvx7n\" (UniqueName: \"kubernetes.io/projected/16c025a9-1e74-4b76-aa08-b409e8ebfeda-kube-api-access-nvx7n\") pod \"nova-api-771b-account-create-update-gnf2t\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") " pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952396 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xf5\" (UniqueName: \"kubernetes.io/projected/68715bdb-bc11-4440-9be6-9399c45ff882-kube-api-access-b8xf5\") pod \"nova-cell0-f4c7-account-create-update-vp8hz\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") " pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.952437 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68715bdb-bc11-4440-9be6-9399c45ff882-operator-scripts\") pod \"nova-cell0-f4c7-account-create-update-vp8hz\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") " pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.953348 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e7d535-d7b0-4742-9f6a-f45e56965313-operator-scripts\") pod \"nova-cell0-db-create-jlvxp\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") " pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.953361 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/16c025a9-1e74-4b76-aa08-b409e8ebfeda-operator-scripts\") pod \"nova-api-771b-account-create-update-gnf2t\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") " pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.972671 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpttr\" (UniqueName: \"kubernetes.io/projected/f8e7d535-d7b0-4742-9f6a-f45e56965313-kube-api-access-fpttr\") pod \"nova-cell0-db-create-jlvxp\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") " pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.981784 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvx7n\" (UniqueName: \"kubernetes.io/projected/16c025a9-1e74-4b76-aa08-b409e8ebfeda-kube-api-access-nvx7n\") pod \"nova-api-771b-account-create-update-gnf2t\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") " pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.989442 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.991582 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-log" containerID="cri-o://cca09f3c7a1803a2e3add2b5b583ee11a0a446886476b6e3b81db5147efc7fc0" gracePeriod=30 Jan 07 03:52:05 crc kubenswrapper[4980]: I0107 03:52:05.991789 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-httpd" containerID="cri-o://4bdcc63e830c668157f4080e416db51ae79d8abfb508c653be5b1436216dc8a5" gracePeriod=30 Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 
03:52:06.024926 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.038946 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jlvxp" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.039194 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.052609 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.053645 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjg7\" (UniqueName: \"kubernetes.io/projected/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-kube-api-access-phjg7\") pod \"nova-cell1-db-create-8f6ft\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") " pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.053751 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-operator-scripts\") pod \"nova-cell1-db-create-8f6ft\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") " pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.053793 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xf5\" (UniqueName: \"kubernetes.io/projected/68715bdb-bc11-4440-9be6-9399c45ff882-kube-api-access-b8xf5\") pod \"nova-cell0-f4c7-account-create-update-vp8hz\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") " pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.053824 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/68715bdb-bc11-4440-9be6-9399c45ff882-operator-scripts\") pod \"nova-cell0-f4c7-account-create-update-vp8hz\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") " pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.054658 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-operator-scripts\") pod \"nova-cell1-db-create-8f6ft\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") " pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.054793 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68715bdb-bc11-4440-9be6-9399c45ff882-operator-scripts\") pod \"nova-cell0-f4c7-account-create-update-vp8hz\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") " pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.054846 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.057275 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.057375 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.057762 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-771b-account-create-update-gnf2t" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.071607 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.076398 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xf5\" (UniqueName: \"kubernetes.io/projected/68715bdb-bc11-4440-9be6-9399c45ff882-kube-api-access-b8xf5\") pod \"nova-cell0-f4c7-account-create-update-vp8hz\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") " pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.081494 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjg7\" (UniqueName: \"kubernetes.io/projected/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-kube-api-access-phjg7\") pod \"nova-cell1-db-create-8f6ft\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") " pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.122594 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8ad2-account-create-update-wglzx"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.123732 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.125922 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.130180 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ad2-account-create-update-wglzx"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.140117 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8f6ft" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156085 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae9ab52-7706-4348-ba49-0c7d0e884dda-operator-scripts\") pod \"nova-cell1-8ad2-account-create-update-wglzx\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") " pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156125 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8gm\" (UniqueName: \"kubernetes.io/projected/7ae9ab52-7706-4348-ba49-0c7d0e884dda-kube-api-access-rs8gm\") pod \"nova-cell1-8ad2-account-create-update-wglzx\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") " pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156149 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-run-httpd\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156205 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwddj\" (UniqueName: \"kubernetes.io/projected/c78e159b-369b-4085-8d58-71e513b0db32-kube-api-access-kwddj\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156285 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-log-httpd\") pod 
\"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156512 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-config-data\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156628 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156743 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.156779 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-scripts\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.220839 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258002 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-config-data\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258050 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258092 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258109 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-scripts\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258136 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae9ab52-7706-4348-ba49-0c7d0e884dda-operator-scripts\") pod \"nova-cell1-8ad2-account-create-update-wglzx\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") " pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258155 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8gm\" (UniqueName: \"kubernetes.io/projected/7ae9ab52-7706-4348-ba49-0c7d0e884dda-kube-api-access-rs8gm\") pod \"nova-cell1-8ad2-account-create-update-wglzx\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") " pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258173 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-run-httpd\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258192 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwddj\" (UniqueName: \"kubernetes.io/projected/c78e159b-369b-4085-8d58-71e513b0db32-kube-api-access-kwddj\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258224 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-log-httpd\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.258840 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-log-httpd\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.259486 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-run-httpd\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.259793 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae9ab52-7706-4348-ba49-0c7d0e884dda-operator-scripts\") pod \"nova-cell1-8ad2-account-create-update-wglzx\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") " pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.269899 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.270169 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-config-data\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.270765 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.281424 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-scripts\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc 
kubenswrapper[4980]: I0107 03:52:06.283925 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwddj\" (UniqueName: \"kubernetes.io/projected/c78e159b-369b-4085-8d58-71e513b0db32-kube-api-access-kwddj\") pod \"ceilometer-0\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.284795 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8gm\" (UniqueName: \"kubernetes.io/projected/7ae9ab52-7706-4348-ba49-0c7d0e884dda-kube-api-access-rs8gm\") pod \"nova-cell1-8ad2-account-create-update-wglzx\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") " pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.409852 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.418405 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.599954 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zcdps"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.607381 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jlvxp"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.659309 4980 generic.go:334] "Generic (PLEG): container finished" podID="35351b71-653b-4428-8ced-16202fce5e62" containerID="cca09f3c7a1803a2e3add2b5b583ee11a0a446886476b6e3b81db5147efc7fc0" exitCode=143 Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.659368 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35351b71-653b-4428-8ced-16202fce5e62","Type":"ContainerDied","Data":"cca09f3c7a1803a2e3add2b5b583ee11a0a446886476b6e3b81db5147efc7fc0"} Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.661511 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jlvxp" event={"ID":"f8e7d535-d7b0-4742-9f6a-f45e56965313","Type":"ContainerStarted","Data":"d42851be29f2c220537d779ffc531b3a2fc1bbf0e62afa88fbd001daf8cf7ee8"} Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.662851 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zcdps" event={"ID":"6626e80c-7ae0-45cc-a3f3-77d55d176b86","Type":"ContainerStarted","Data":"dc1e3cc62f5be2319a33038057033222d41e0031695d82fcf6a854fcb1814df7"} Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.875292 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-771b-account-create-update-gnf2t"] Jan 07 03:52:06 crc kubenswrapper[4980]: W0107 03:52:06.891392 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c025a9_1e74_4b76_aa08_b409e8ebfeda.slice/crio-9d429703586a15439f9aa2fda5966f0181a171cc6d5d5a3b5c28b4406da376b2 WatchSource:0}: Error finding container 9d429703586a15439f9aa2fda5966f0181a171cc6d5d5a3b5c28b4406da376b2: Status 404 returned error can't find the container with id 9d429703586a15439f9aa2fda5966f0181a171cc6d5d5a3b5c28b4406da376b2 Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.940246 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f4c7-account-create-update-vp8hz"] Jan 07 03:52:06 crc kubenswrapper[4980]: I0107 03:52:06.949152 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8f6ft"] Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.111267 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:07 crc kubenswrapper[4980]: W0107 03:52:07.129363 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc78e159b_369b_4085_8d58_71e513b0db32.slice/crio-91a7050a816a7424ee9b7434d9fe2172e763d1033c756839bc21dd3bf449ffe3 WatchSource:0}: Error finding container 91a7050a816a7424ee9b7434d9fe2172e763d1033c756839bc21dd3bf449ffe3: Status 404 returned error can't find the container with id 91a7050a816a7424ee9b7434d9fe2172e763d1033c756839bc21dd3bf449ffe3 Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.133994 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ad2-account-create-update-wglzx"] Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.551993 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.553500 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-log" containerID="cri-o://17de836762208ae15920ea22b9836e1567d254d8d296e43e52ac798bb595a9a1" gracePeriod=30 Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.553542 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-httpd" containerID="cri-o://8baa92e6dd0791333bf2483745673cd8a263959940c42a2d3c407410b2c97cdb" gracePeriod=30 Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.683341 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" event={"ID":"68715bdb-bc11-4440-9be6-9399c45ff882","Type":"ContainerStarted","Data":"97976152a9675040bf5595084c2cd220d312a253cf0b6d407011a9f3e11a4ba3"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.683607 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" event={"ID":"68715bdb-bc11-4440-9be6-9399c45ff882","Type":"ContainerStarted","Data":"0721dbfca8b5d4012609f612bcabeadd01c64528933bbf4bdef811a0182b82dd"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.695765 4980 generic.go:334] "Generic (PLEG): container finished" podID="6626e80c-7ae0-45cc-a3f3-77d55d176b86" containerID="9b7677ed8a7edda305985be3d206c3f68e6dbf6e401f01e72e94d071f1fc969c" exitCode=0 Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.697056 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zcdps" event={"ID":"6626e80c-7ae0-45cc-a3f3-77d55d176b86","Type":"ContainerDied","Data":"9b7677ed8a7edda305985be3d206c3f68e6dbf6e401f01e72e94d071f1fc969c"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.700863 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerStarted","Data":"91a7050a816a7424ee9b7434d9fe2172e763d1033c756839bc21dd3bf449ffe3"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.705248 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" podStartSLOduration=2.705231683 podStartE2EDuration="2.705231683s" podCreationTimestamp="2026-01-07 03:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:52:07.699591117 +0000 UTC m=+1174.265285852" watchObservedRunningTime="2026-01-07 03:52:07.705231683 +0000 UTC m=+1174.270926418" Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.707683 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-771b-account-create-update-gnf2t" event={"ID":"16c025a9-1e74-4b76-aa08-b409e8ebfeda","Type":"ContainerStarted","Data":"ff198520c6314d0b50ef700eebd3491f957d7650c8ce83452134a91b06a51184"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.707733 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-771b-account-create-update-gnf2t" event={"ID":"16c025a9-1e74-4b76-aa08-b409e8ebfeda","Type":"ContainerStarted","Data":"9d429703586a15439f9aa2fda5966f0181a171cc6d5d5a3b5c28b4406da376b2"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.710050 4980 generic.go:334] "Generic (PLEG): container finished" podID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerID="13a8448e218af0221ddaf7496f2b718343f57d58d232fbf8f3b7827312911de5" exitCode=137 Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.710122 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547b7ddd64-7hclw" event={"ID":"a806c806-4d43-4a04-aefa-0544f2a5175f","Type":"ContainerDied","Data":"13a8448e218af0221ddaf7496f2b718343f57d58d232fbf8f3b7827312911de5"} Jan 07 03:52:07 crc kubenswrapper[4980]: 
I0107 03:52:07.717802 4980 generic.go:334] "Generic (PLEG): container finished" podID="e7d18756-8ad4-44c7-8ff0-5cd67e7d8585" containerID="3d3ef7ea01590bfe0a140cd40b0526247be5d8f57d38a2bfb49e02e8b294e640" exitCode=0 Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.717869 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8f6ft" event={"ID":"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585","Type":"ContainerDied","Data":"3d3ef7ea01590bfe0a140cd40b0526247be5d8f57d38a2bfb49e02e8b294e640"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.717900 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8f6ft" event={"ID":"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585","Type":"ContainerStarted","Data":"f21d0dff2783b0ac24ade60b7123f1405ecb5951ed360d0aded22750c7fd36d5"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.719147 4980 generic.go:334] "Generic (PLEG): container finished" podID="f8e7d535-d7b0-4742-9f6a-f45e56965313" containerID="fff9fde2317928b1ed7c5f1bd8ec2077e6898f82076a6bf6ae4bb5f8c70e0746" exitCode=0 Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.719185 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jlvxp" event={"ID":"f8e7d535-d7b0-4742-9f6a-f45e56965313","Type":"ContainerDied","Data":"fff9fde2317928b1ed7c5f1bd8ec2077e6898f82076a6bf6ae4bb5f8c70e0746"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.758716 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7aa2c32-8209-48d0-8757-34e5bf86dd75" path="/var/lib/kubelet/pods/c7aa2c32-8209-48d0-8757-34e5bf86dd75/volumes" Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.760756 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" event={"ID":"7ae9ab52-7706-4348-ba49-0c7d0e884dda","Type":"ContainerStarted","Data":"d54a85d19796047450e3f1ba149298749348948eb059cf069b534f96502e6ed6"} Jan 07 03:52:07 
crc kubenswrapper[4980]: I0107 03:52:07.760791 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" event={"ID":"7ae9ab52-7706-4348-ba49-0c7d0e884dda","Type":"ContainerStarted","Data":"172fefe7e5184f3839193895da098c4beadf52a9e879fb6356bd71ff14fd9e1d"} Jan 07 03:52:07 crc kubenswrapper[4980]: I0107 03:52:07.871837 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547b7ddd64-7hclw" Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.039700 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a806c806-4d43-4a04-aefa-0544f2a5175f-logs\") pod \"a806c806-4d43-4a04-aefa-0544f2a5175f\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.040015 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-tls-certs\") pod \"a806c806-4d43-4a04-aefa-0544f2a5175f\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.040095 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr6mb\" (UniqueName: \"kubernetes.io/projected/a806c806-4d43-4a04-aefa-0544f2a5175f-kube-api-access-nr6mb\") pod \"a806c806-4d43-4a04-aefa-0544f2a5175f\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.040118 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-combined-ca-bundle\") pod \"a806c806-4d43-4a04-aefa-0544f2a5175f\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") " Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.040150 4980 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-scripts\") pod \"a806c806-4d43-4a04-aefa-0544f2a5175f\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") "
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.040175 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-secret-key\") pod \"a806c806-4d43-4a04-aefa-0544f2a5175f\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") "
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.040173 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a806c806-4d43-4a04-aefa-0544f2a5175f-logs" (OuterVolumeSpecName: "logs") pod "a806c806-4d43-4a04-aefa-0544f2a5175f" (UID: "a806c806-4d43-4a04-aefa-0544f2a5175f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.040325 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-config-data\") pod \"a806c806-4d43-4a04-aefa-0544f2a5175f\" (UID: \"a806c806-4d43-4a04-aefa-0544f2a5175f\") "
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.041353 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a806c806-4d43-4a04-aefa-0544f2a5175f-logs\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.045819 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a806c806-4d43-4a04-aefa-0544f2a5175f" (UID: "a806c806-4d43-4a04-aefa-0544f2a5175f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.046267 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a806c806-4d43-4a04-aefa-0544f2a5175f-kube-api-access-nr6mb" (OuterVolumeSpecName: "kube-api-access-nr6mb") pod "a806c806-4d43-4a04-aefa-0544f2a5175f" (UID: "a806c806-4d43-4a04-aefa-0544f2a5175f"). InnerVolumeSpecName "kube-api-access-nr6mb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.063997 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-scripts" (OuterVolumeSpecName: "scripts") pod "a806c806-4d43-4a04-aefa-0544f2a5175f" (UID: "a806c806-4d43-4a04-aefa-0544f2a5175f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.074633 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-config-data" (OuterVolumeSpecName: "config-data") pod "a806c806-4d43-4a04-aefa-0544f2a5175f" (UID: "a806c806-4d43-4a04-aefa-0544f2a5175f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.081838 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a806c806-4d43-4a04-aefa-0544f2a5175f" (UID: "a806c806-4d43-4a04-aefa-0544f2a5175f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.119755 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "a806c806-4d43-4a04-aefa-0544f2a5175f" (UID: "a806c806-4d43-4a04-aefa-0544f2a5175f"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.144080 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-config-data\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.144113 4980 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.144124 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr6mb\" (UniqueName: \"kubernetes.io/projected/a806c806-4d43-4a04-aefa-0544f2a5175f-kube-api-access-nr6mb\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.144133 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.144142 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a806c806-4d43-4a04-aefa-0544f2a5175f-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.144153 4980 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a806c806-4d43-4a04-aefa-0544f2a5175f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.749495 4980 generic.go:334] "Generic (PLEG): container finished" podID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerID="17de836762208ae15920ea22b9836e1567d254d8d296e43e52ac798bb595a9a1" exitCode=143
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.750761 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0","Type":"ContainerDied","Data":"17de836762208ae15920ea22b9836e1567d254d8d296e43e52ac798bb595a9a1"}
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.759140 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerStarted","Data":"6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358"}
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.759200 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerStarted","Data":"af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268"}
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.761862 4980 generic.go:334] "Generic (PLEG): container finished" podID="16c025a9-1e74-4b76-aa08-b409e8ebfeda" containerID="ff198520c6314d0b50ef700eebd3491f957d7650c8ce83452134a91b06a51184" exitCode=0
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.761942 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-771b-account-create-update-gnf2t" event={"ID":"16c025a9-1e74-4b76-aa08-b409e8ebfeda","Type":"ContainerDied","Data":"ff198520c6314d0b50ef700eebd3491f957d7650c8ce83452134a91b06a51184"}
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.765854 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547b7ddd64-7hclw" event={"ID":"a806c806-4d43-4a04-aefa-0544f2a5175f","Type":"ContainerDied","Data":"32236fa005cbe821581b4f8f9bcf30d0c456a66b7437388b772ee50c3404ac33"}
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.765905 4980 scope.go:117] "RemoveContainer" containerID="cd80c28616f3a7a48c0607d28c1a8354eb753e72520c1bfc5ee438419f2ad8e7"
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.766148 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547b7ddd64-7hclw"
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.778948 4980 generic.go:334] "Generic (PLEG): container finished" podID="7ae9ab52-7706-4348-ba49-0c7d0e884dda" containerID="d54a85d19796047450e3f1ba149298749348948eb059cf069b534f96502e6ed6" exitCode=0
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.779018 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" event={"ID":"7ae9ab52-7706-4348-ba49-0c7d0e884dda","Type":"ContainerDied","Data":"d54a85d19796047450e3f1ba149298749348948eb059cf069b534f96502e6ed6"}
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.797112 4980 generic.go:334] "Generic (PLEG): container finished" podID="68715bdb-bc11-4440-9be6-9399c45ff882" containerID="97976152a9675040bf5595084c2cd220d312a253cf0b6d407011a9f3e11a4ba3" exitCode=0
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.797234 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" event={"ID":"68715bdb-bc11-4440-9be6-9399c45ff882","Type":"ContainerDied","Data":"97976152a9675040bf5595084c2cd220d312a253cf0b6d407011a9f3e11a4ba3"}
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.825019 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547b7ddd64-7hclw"]
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.830968 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-547b7ddd64-7hclw"]
Jan 07 03:52:08 crc kubenswrapper[4980]: I0107 03:52:08.986796 4980 scope.go:117] "RemoveContainer" containerID="13a8448e218af0221ddaf7496f2b718343f57d58d232fbf8f3b7827312911de5"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.412190 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-771b-account-create-update-gnf2t"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.543970 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jlvxp"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.565138 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8f6ft"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.586337 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16c025a9-1e74-4b76-aa08-b409e8ebfeda-operator-scripts\") pod \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.586505 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvx7n\" (UniqueName: \"kubernetes.io/projected/16c025a9-1e74-4b76-aa08-b409e8ebfeda-kube-api-access-nvx7n\") pod \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\" (UID: \"16c025a9-1e74-4b76-aa08-b409e8ebfeda\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.587088 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c025a9-1e74-4b76-aa08-b409e8ebfeda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16c025a9-1e74-4b76-aa08-b409e8ebfeda" (UID: "16c025a9-1e74-4b76-aa08-b409e8ebfeda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.587462 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16c025a9-1e74-4b76-aa08-b409e8ebfeda-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.597630 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c025a9-1e74-4b76-aa08-b409e8ebfeda-kube-api-access-nvx7n" (OuterVolumeSpecName: "kube-api-access-nvx7n") pod "16c025a9-1e74-4b76-aa08-b409e8ebfeda" (UID: "16c025a9-1e74-4b76-aa08-b409e8ebfeda"). InnerVolumeSpecName "kube-api-access-nvx7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.655661 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.660599 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zcdps"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.690692 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e7d535-d7b0-4742-9f6a-f45e56965313-operator-scripts\") pod \"f8e7d535-d7b0-4742-9f6a-f45e56965313\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.690845 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phjg7\" (UniqueName: \"kubernetes.io/projected/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-kube-api-access-phjg7\") pod \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.690979 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpttr\" (UniqueName: \"kubernetes.io/projected/f8e7d535-d7b0-4742-9f6a-f45e56965313-kube-api-access-fpttr\") pod \"f8e7d535-d7b0-4742-9f6a-f45e56965313\" (UID: \"f8e7d535-d7b0-4742-9f6a-f45e56965313\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.691007 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-operator-scripts\") pod \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\" (UID: \"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.691865 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvx7n\" (UniqueName: \"kubernetes.io/projected/16c025a9-1e74-4b76-aa08-b409e8ebfeda-kube-api-access-nvx7n\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.693009 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7d18756-8ad4-44c7-8ff0-5cd67e7d8585" (UID: "e7d18756-8ad4-44c7-8ff0-5cd67e7d8585"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.693681 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e7d535-d7b0-4742-9f6a-f45e56965313-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8e7d535-d7b0-4742-9f6a-f45e56965313" (UID: "f8e7d535-d7b0-4742-9f6a-f45e56965313"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.701972 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-kube-api-access-phjg7" (OuterVolumeSpecName: "kube-api-access-phjg7") pod "e7d18756-8ad4-44c7-8ff0-5cd67e7d8585" (UID: "e7d18756-8ad4-44c7-8ff0-5cd67e7d8585"). InnerVolumeSpecName "kube-api-access-phjg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.708893 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e7d535-d7b0-4742-9f6a-f45e56965313-kube-api-access-fpttr" (OuterVolumeSpecName: "kube-api-access-fpttr") pod "f8e7d535-d7b0-4742-9f6a-f45e56965313" (UID: "f8e7d535-d7b0-4742-9f6a-f45e56965313"). InnerVolumeSpecName "kube-api-access-fpttr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.747108 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" path="/var/lib/kubelet/pods/a806c806-4d43-4a04-aefa-0544f2a5175f/volumes"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793225 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6626e80c-7ae0-45cc-a3f3-77d55d176b86-operator-scripts\") pod \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793273 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs8gm\" (UniqueName: \"kubernetes.io/projected/7ae9ab52-7706-4348-ba49-0c7d0e884dda-kube-api-access-rs8gm\") pod \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793334 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae9ab52-7706-4348-ba49-0c7d0e884dda-operator-scripts\") pod \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\" (UID: \"7ae9ab52-7706-4348-ba49-0c7d0e884dda\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793366 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9b8k\" (UniqueName: \"kubernetes.io/projected/6626e80c-7ae0-45cc-a3f3-77d55d176b86-kube-api-access-p9b8k\") pod \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\" (UID: \"6626e80c-7ae0-45cc-a3f3-77d55d176b86\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793754 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpttr\" (UniqueName: \"kubernetes.io/projected/f8e7d535-d7b0-4742-9f6a-f45e56965313-kube-api-access-fpttr\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793768 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793778 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e7d535-d7b0-4742-9f6a-f45e56965313-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.793787 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phjg7\" (UniqueName: \"kubernetes.io/projected/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585-kube-api-access-phjg7\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.797841 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae9ab52-7706-4348-ba49-0c7d0e884dda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ae9ab52-7706-4348-ba49-0c7d0e884dda" (UID: "7ae9ab52-7706-4348-ba49-0c7d0e884dda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.797963 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6626e80c-7ae0-45cc-a3f3-77d55d176b86-kube-api-access-p9b8k" (OuterVolumeSpecName: "kube-api-access-p9b8k") pod "6626e80c-7ae0-45cc-a3f3-77d55d176b86" (UID: "6626e80c-7ae0-45cc-a3f3-77d55d176b86"). InnerVolumeSpecName "kube-api-access-p9b8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.798179 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6626e80c-7ae0-45cc-a3f3-77d55d176b86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6626e80c-7ae0-45cc-a3f3-77d55d176b86" (UID: "6626e80c-7ae0-45cc-a3f3-77d55d176b86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.798354 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae9ab52-7706-4348-ba49-0c7d0e884dda-kube-api-access-rs8gm" (OuterVolumeSpecName: "kube-api-access-rs8gm") pod "7ae9ab52-7706-4348-ba49-0c7d0e884dda" (UID: "7ae9ab52-7706-4348-ba49-0c7d0e884dda"). InnerVolumeSpecName "kube-api-access-rs8gm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.827663 4980 generic.go:334] "Generic (PLEG): container finished" podID="35351b71-653b-4428-8ced-16202fce5e62" containerID="4bdcc63e830c668157f4080e416db51ae79d8abfb508c653be5b1436216dc8a5" exitCode=0
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.829513 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35351b71-653b-4428-8ced-16202fce5e62","Type":"ContainerDied","Data":"4bdcc63e830c668157f4080e416db51ae79d8abfb508c653be5b1436216dc8a5"}
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.833473 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8f6ft"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.834324 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8f6ft" event={"ID":"e7d18756-8ad4-44c7-8ff0-5cd67e7d8585","Type":"ContainerDied","Data":"f21d0dff2783b0ac24ade60b7123f1405ecb5951ed360d0aded22750c7fd36d5"}
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.834437 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21d0dff2783b0ac24ade60b7123f1405ecb5951ed360d0aded22750c7fd36d5"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.845068 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.856835 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jlvxp" event={"ID":"f8e7d535-d7b0-4742-9f6a-f45e56965313","Type":"ContainerDied","Data":"d42851be29f2c220537d779ffc531b3a2fc1bbf0e62afa88fbd001daf8cf7ee8"}
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.856864 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jlvxp"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.863114 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.868729 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42851be29f2c220537d779ffc531b3a2fc1bbf0e62afa88fbd001daf8cf7ee8"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.868902 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ad2-account-create-update-wglzx" event={"ID":"7ae9ab52-7706-4348-ba49-0c7d0e884dda","Type":"ContainerDied","Data":"172fefe7e5184f3839193895da098c4beadf52a9e879fb6356bd71ff14fd9e1d"}
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.869011 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172fefe7e5184f3839193895da098c4beadf52a9e879fb6356bd71ff14fd9e1d"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.873619 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zcdps" event={"ID":"6626e80c-7ae0-45cc-a3f3-77d55d176b86","Type":"ContainerDied","Data":"dc1e3cc62f5be2319a33038057033222d41e0031695d82fcf6a854fcb1814df7"}
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.873693 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc1e3cc62f5be2319a33038057033222d41e0031695d82fcf6a854fcb1814df7"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.873800 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zcdps"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.891336 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerStarted","Data":"6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00"}
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.897540 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6626e80c-7ae0-45cc-a3f3-77d55d176b86-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.897587 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs8gm\" (UniqueName: \"kubernetes.io/projected/7ae9ab52-7706-4348-ba49-0c7d0e884dda-kube-api-access-rs8gm\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.897597 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae9ab52-7706-4348-ba49-0c7d0e884dda-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.897606 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9b8k\" (UniqueName: \"kubernetes.io/projected/6626e80c-7ae0-45cc-a3f3-77d55d176b86-kube-api-access-p9b8k\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.904759 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-771b-account-create-update-gnf2t"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.904796 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-771b-account-create-update-gnf2t" event={"ID":"16c025a9-1e74-4b76-aa08-b409e8ebfeda","Type":"ContainerDied","Data":"9d429703586a15439f9aa2fda5966f0181a171cc6d5d5a3b5c28b4406da376b2"}
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.904838 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d429703586a15439f9aa2fda5966f0181a171cc6d5d5a3b5c28b4406da376b2"
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.937421 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.998928 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-combined-ca-bundle\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.999015 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-logs\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.999044 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-scripts\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.999073 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-config-data\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.999129 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-public-tls-certs\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.999175 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-httpd-run\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.999195 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:09 crc kubenswrapper[4980]: I0107 03:52:09.999276 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff27g\" (UniqueName: \"kubernetes.io/projected/35351b71-653b-4428-8ced-16202fce5e62-kube-api-access-ff27g\") pod \"35351b71-653b-4428-8ced-16202fce5e62\" (UID: \"35351b71-653b-4428-8ced-16202fce5e62\") "
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.005873 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.007170 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-logs" (OuterVolumeSpecName: "logs") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.009804 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35351b71-653b-4428-8ced-16202fce5e62-kube-api-access-ff27g" (OuterVolumeSpecName: "kube-api-access-ff27g") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "kube-api-access-ff27g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.010591 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-scripts" (OuterVolumeSpecName: "scripts") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.030535 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.045320 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.071740 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.088313 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-config-data" (OuterVolumeSpecName: "config-data") pod "35351b71-653b-4428-8ced-16202fce5e62" (UID: "35351b71-653b-4428-8ced-16202fce5e62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.107018 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.107050 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-logs\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.109327 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.109363 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-config-data\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.109375 4980 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35351b71-653b-4428-8ced-16202fce5e62-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.109387 4980 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35351b71-653b-4428-8ced-16202fce5e62-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.109417 4980 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.109431 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff27g\" (UniqueName: \"kubernetes.io/projected/35351b71-653b-4428-8ced-16202fce5e62-kube-api-access-ff27g\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.142790 4980 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.210690 4980 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.331312 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz"
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.416528 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8xf5\" (UniqueName: \"kubernetes.io/projected/68715bdb-bc11-4440-9be6-9399c45ff882-kube-api-access-b8xf5\") pod \"68715bdb-bc11-4440-9be6-9399c45ff882\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") "
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.420744 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68715bdb-bc11-4440-9be6-9399c45ff882-kube-api-access-b8xf5" (OuterVolumeSpecName: "kube-api-access-b8xf5") pod "68715bdb-bc11-4440-9be6-9399c45ff882" (UID: "68715bdb-bc11-4440-9be6-9399c45ff882"). InnerVolumeSpecName "kube-api-access-b8xf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.518938 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68715bdb-bc11-4440-9be6-9399c45ff882-operator-scripts\") pod \"68715bdb-bc11-4440-9be6-9399c45ff882\" (UID: \"68715bdb-bc11-4440-9be6-9399c45ff882\") "
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.519462 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68715bdb-bc11-4440-9be6-9399c45ff882-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68715bdb-bc11-4440-9be6-9399c45ff882" (UID: "68715bdb-bc11-4440-9be6-9399c45ff882"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.519494 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8xf5\" (UniqueName: \"kubernetes.io/projected/68715bdb-bc11-4440-9be6-9399c45ff882-kube-api-access-b8xf5\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.621700 4980 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68715bdb-bc11-4440-9be6-9399c45ff882-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.912267 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz" event={"ID":"68715bdb-bc11-4440-9be6-9399c45ff882","Type":"ContainerDied","Data":"0721dbfca8b5d4012609f612bcabeadd01c64528933bbf4bdef811a0182b82dd"}
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.912310 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0721dbfca8b5d4012609f612bcabeadd01c64528933bbf4bdef811a0182b82dd"
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.912340 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f4c7-account-create-update-vp8hz"
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.913579 4980 generic.go:334] "Generic (PLEG): container finished" podID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerID="8baa92e6dd0791333bf2483745673cd8a263959940c42a2d3c407410b2c97cdb" exitCode=0
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.913638 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0","Type":"ContainerDied","Data":"8baa92e6dd0791333bf2483745673cd8a263959940c42a2d3c407410b2c97cdb"}
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.925102 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35351b71-653b-4428-8ced-16202fce5e62","Type":"ContainerDied","Data":"5c7e991e1113a15b46f9dde4f43897affd5dc72297b9b29557c468b7b30b2d86"}
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.925152 4980 scope.go:117] "RemoveContainer" containerID="4bdcc63e830c668157f4080e416db51ae79d8abfb508c653be5b1436216dc8a5"
Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.925277 4980 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.983963 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:52:10 crc kubenswrapper[4980]: I0107 03:52:10.994085 4980 scope.go:117] "RemoveContainer" containerID="cca09f3c7a1803a2e3add2b5b583ee11a0a446886476b6e3b81db5147efc7fc0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.005347 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.025752 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026137 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-httpd" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026148 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-httpd" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026170 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon-log" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026178 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon-log" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026187 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026193 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026205 4980 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-log" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026211 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-log" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026227 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68715bdb-bc11-4440-9be6-9399c45ff882" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026234 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="68715bdb-bc11-4440-9be6-9399c45ff882" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026244 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e7d535-d7b0-4742-9f6a-f45e56965313" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026250 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e7d535-d7b0-4742-9f6a-f45e56965313" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026263 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c025a9-1e74-4b76-aa08-b409e8ebfeda" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026268 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c025a9-1e74-4b76-aa08-b409e8ebfeda" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026276 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae9ab52-7706-4348-ba49-0c7d0e884dda" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026282 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae9ab52-7706-4348-ba49-0c7d0e884dda" containerName="mariadb-account-create-update" Jan 07 
03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026289 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6626e80c-7ae0-45cc-a3f3-77d55d176b86" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026295 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6626e80c-7ae0-45cc-a3f3-77d55d176b86" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: E0107 03:52:11.026303 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d18756-8ad4-44c7-8ff0-5cd67e7d8585" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026309 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d18756-8ad4-44c7-8ff0-5cd67e7d8585" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026468 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-httpd" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026480 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d18756-8ad4-44c7-8ff0-5cd67e7d8585" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026490 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="68715bdb-bc11-4440-9be6-9399c45ff882" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026502 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026512 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c025a9-1e74-4b76-aa08-b409e8ebfeda" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026521 4980 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="35351b71-653b-4428-8ced-16202fce5e62" containerName="glance-log" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026532 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e7d535-d7b0-4742-9f6a-f45e56965313" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026541 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="a806c806-4d43-4a04-aefa-0544f2a5175f" containerName="horizon-log" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026603 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6626e80c-7ae0-45cc-a3f3-77d55d176b86" containerName="mariadb-database-create" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.026615 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae9ab52-7706-4348-ba49-0c7d0e884dda" containerName="mariadb-account-create-update" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.027522 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.034318 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.034412 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.045054 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137689 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137765 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137786 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137804 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f122c82e-51a4-4b1c-8457-02b12f045c52-logs\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137868 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b25p\" (UniqueName: \"kubernetes.io/projected/f122c82e-51a4-4b1c-8457-02b12f045c52-kube-api-access-5b25p\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137890 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f122c82e-51a4-4b1c-8457-02b12f045c52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137904 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.137929 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239352 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b25p\" 
(UniqueName: \"kubernetes.io/projected/f122c82e-51a4-4b1c-8457-02b12f045c52-kube-api-access-5b25p\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239414 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f122c82e-51a4-4b1c-8457-02b12f045c52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239438 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239480 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239586 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239637 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239660 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.239687 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f122c82e-51a4-4b1c-8457-02b12f045c52-logs\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.240065 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f122c82e-51a4-4b1c-8457-02b12f045c52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.240221 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f122c82e-51a4-4b1c-8457-02b12f045c52-logs\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.240237 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"f122c82e-51a4-4b1c-8457-02b12f045c52\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.248455 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.248884 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.250227 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.250763 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f122c82e-51a4-4b1c-8457-02b12f045c52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.251697 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.263344 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b25p\" (UniqueName: \"kubernetes.io/projected/f122c82e-51a4-4b1c-8457-02b12f045c52-kube-api-access-5b25p\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.313035 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f122c82e-51a4-4b1c-8457-02b12f045c52\") " pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341289 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-logs\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341369 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-internal-tls-certs\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341475 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-combined-ca-bundle\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341565 4980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pkh7z\" (UniqueName: \"kubernetes.io/projected/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-kube-api-access-pkh7z\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341628 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-scripts\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341694 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341728 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-httpd-run\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.341757 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-config-data\") pod \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\" (UID: \"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0\") " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.342111 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-logs" (OuterVolumeSpecName: "logs") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.342818 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.347490 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-scripts" (OuterVolumeSpecName: "scripts") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.358649 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.363168 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-kube-api-access-pkh7z" (OuterVolumeSpecName: "kube-api-access-pkh7z") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "kube-api-access-pkh7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.366233 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.366371 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.394396 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.418386 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.440060 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-config-data" (OuterVolumeSpecName: "config-data") pod "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" (UID: "5f503dff-c741-4e74-a5f6-ac2aba4eb9f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.444284 4980 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.444316 4980 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.444328 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.444337 4980 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.444347 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.444357 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkh7z\" (UniqueName: \"kubernetes.io/projected/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-kube-api-access-pkh7z\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.444367 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.462130 4980 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.547825 4980 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.745270 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35351b71-653b-4428-8ced-16202fce5e62" path="/var/lib/kubelet/pods/35351b71-653b-4428-8ced-16202fce5e62/volumes" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.936061 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.949668 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f122c82e-51a4-4b1c-8457-02b12f045c52","Type":"ContainerStarted","Data":"624143a481cc9bad28971f8432fe3af4855035629195a5765a560c17f1eac297"} Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.952095 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f503dff-c741-4e74-a5f6-ac2aba4eb9f0","Type":"ContainerDied","Data":"1a6eecc54b5d93c73d5e1b0c6d570821c6822de055b9416e43bacfa4b0e81425"} Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.952136 4980 scope.go:117] "RemoveContainer" containerID="8baa92e6dd0791333bf2483745673cd8a263959940c42a2d3c407410b2c97cdb" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.952309 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.958710 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerStarted","Data":"f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597"} Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.958880 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-central-agent" containerID="cri-o://af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268" gracePeriod=30 Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.958961 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.958976 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="sg-core" containerID="cri-o://6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00" gracePeriod=30 Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.958965 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="proxy-httpd" containerID="cri-o://f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597" gracePeriod=30 Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.959034 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-notification-agent" containerID="cri-o://6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358" gracePeriod=30 Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.989706 4980 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:52:11 crc kubenswrapper[4980]: I0107 03:52:11.999456 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.001671 4980 scope.go:117] "RemoveContainer" containerID="17de836762208ae15920ea22b9836e1567d254d8d296e43e52ac798bb595a9a1" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.018062 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:52:12 crc kubenswrapper[4980]: E0107 03:52:12.018605 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-log" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.018630 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-log" Jan 07 03:52:12 crc kubenswrapper[4980]: E0107 03:52:12.018668 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-httpd" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.018676 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-httpd" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.018880 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-httpd" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.018908 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" containerName="glance-log" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.020285 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.023092 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.023504 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.037696 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.043988 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.264748271 podStartE2EDuration="7.043969153s" podCreationTimestamp="2026-01-07 03:52:05 +0000 UTC" firstStartedPulling="2026-01-07 03:52:07.132292298 +0000 UTC m=+1173.697987033" lastFinishedPulling="2026-01-07 03:52:10.91151318 +0000 UTC m=+1177.477207915" observedRunningTime="2026-01-07 03:52:12.002238601 +0000 UTC m=+1178.567933336" watchObservedRunningTime="2026-01-07 03:52:12.043969153 +0000 UTC m=+1178.609663888" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.159744 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.159847 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfqw\" (UniqueName: \"kubernetes.io/projected/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-kube-api-access-zrfqw\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " 
pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.159887 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.159907 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.159938 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.159959 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.159980 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " 
pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.160000 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261193 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261642 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfqw\" (UniqueName: \"kubernetes.io/projected/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-kube-api-access-zrfqw\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261687 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261706 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " 
pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261739 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261755 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261774 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261789 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.261783 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc 
kubenswrapper[4980]: I0107 03:52:12.263632 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.263922 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.268613 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.269767 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.269979 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.271244 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.280587 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfqw\" (UniqueName: \"kubernetes.io/projected/8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0-kube-api-access-zrfqw\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.294689 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0\") " pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.350733 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.920339 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 07 03:52:12 crc kubenswrapper[4980]: W0107 03:52:12.929692 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ef8bb9f_4c39_47a0_b6d8_6a20655d42a0.slice/crio-f043f7a36ba5e5c8619db6e9d0c6b443041c62c90f388c6646a74b2d70c92029 WatchSource:0}: Error finding container f043f7a36ba5e5c8619db6e9d0c6b443041c62c90f388c6646a74b2d70c92029: Status 404 returned error can't find the container with id f043f7a36ba5e5c8619db6e9d0c6b443041c62c90f388c6646a74b2d70c92029 Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.986089 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f122c82e-51a4-4b1c-8457-02b12f045c52","Type":"ContainerStarted","Data":"3bd2d55659b0b5809c144731a2d7112bcc24feac13433fd9755227989c47504d"} Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.994291 4980 generic.go:334] "Generic (PLEG): container finished" podID="c78e159b-369b-4085-8d58-71e513b0db32" containerID="f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597" exitCode=0 Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.994327 4980 generic.go:334] "Generic (PLEG): container finished" podID="c78e159b-369b-4085-8d58-71e513b0db32" containerID="6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00" exitCode=2 Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.994335 4980 generic.go:334] "Generic (PLEG): container finished" podID="c78e159b-369b-4085-8d58-71e513b0db32" containerID="6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358" exitCode=0 Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.994382 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerDied","Data":"f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597"} Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.994412 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerDied","Data":"6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00"} Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.994422 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerDied","Data":"6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358"} Jan 07 03:52:12 crc kubenswrapper[4980]: I0107 03:52:12.996576 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0","Type":"ContainerStarted","Data":"f043f7a36ba5e5c8619db6e9d0c6b443041c62c90f388c6646a74b2d70c92029"} Jan 07 03:52:13 crc kubenswrapper[4980]: E0107 03:52:13.655919 4980 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d74f1e6_2229_4ff7_8c80_b12d09285da4.slice/crio-980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851\": RecentStats: unable to find data in memory cache]" Jan 07 03:52:13 crc kubenswrapper[4980]: I0107 03:52:13.763879 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f503dff-c741-4e74-a5f6-ac2aba4eb9f0" path="/var/lib/kubelet/pods/5f503dff-c741-4e74-a5f6-ac2aba4eb9f0/volumes" Jan 07 03:52:14 crc kubenswrapper[4980]: I0107 03:52:14.007273 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0","Type":"ContainerStarted","Data":"82a68aab76c91e6b0d90416869e4d3fe39f591a92cf75d6601e6d61adac82a54"} Jan 07 03:52:14 crc kubenswrapper[4980]: I0107 03:52:14.012754 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f122c82e-51a4-4b1c-8457-02b12f045c52","Type":"ContainerStarted","Data":"8a67da294d2ce46618a7a5798f6c7e4503782940dc310120933a94e9337b36df"} Jan 07 03:52:14 crc kubenswrapper[4980]: I0107 03:52:14.030924 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.030908567 podStartE2EDuration="4.030908567s" podCreationTimestamp="2026-01-07 03:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:52:14.029811202 +0000 UTC m=+1180.595505937" watchObservedRunningTime="2026-01-07 03:52:14.030908567 +0000 UTC m=+1180.596603302" Jan 07 03:52:15 crc kubenswrapper[4980]: I0107 03:52:15.025501 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0","Type":"ContainerStarted","Data":"276f78e538084fcf102ba85a887f99d74f25b4e3422fb742c326104b6c21b1b1"} Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.324328 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.324303112 podStartE2EDuration="5.324303112s" podCreationTimestamp="2026-01-07 03:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:52:15.057101775 +0000 UTC m=+1181.622796520" watchObservedRunningTime="2026-01-07 03:52:16.324303112 +0000 UTC m=+1182.889997857" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.329717 4980 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfwjd"] Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.331078 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.332913 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.337448 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tvmgq" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.337678 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.353045 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfwjd"] Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.467596 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-scripts\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.467678 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-config-data\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.467714 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rj9\" (UniqueName: 
\"kubernetes.io/projected/c6dbf45b-58e2-4fa6-a127-d604586a3b44-kube-api-access-99rj9\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.467770 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.569429 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-scripts\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.569490 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-config-data\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.569515 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rj9\" (UniqueName: \"kubernetes.io/projected/c6dbf45b-58e2-4fa6-a127-d604586a3b44-kube-api-access-99rj9\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.569570 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.581573 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-scripts\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.581655 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-config-data\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.591240 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.594744 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rj9\" (UniqueName: \"kubernetes.io/projected/c6dbf45b-58e2-4fa6-a127-d604586a3b44-kube-api-access-99rj9\") pod \"nova-cell0-conductor-db-sync-lfwjd\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:16 crc kubenswrapper[4980]: I0107 03:52:16.662737 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:17 crc kubenswrapper[4980]: I0107 03:52:17.174951 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfwjd"] Jan 07 03:52:18 crc kubenswrapper[4980]: I0107 03:52:18.059000 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" event={"ID":"c6dbf45b-58e2-4fa6-a127-d604586a3b44","Type":"ContainerStarted","Data":"25d444c68de30c0ed3d3dbe9fb615adaef7f4af3038dab936aebf7dd971e0812"} Jan 07 03:52:18 crc kubenswrapper[4980]: I0107 03:52:18.956046 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017126 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-run-httpd\") pod \"c78e159b-369b-4085-8d58-71e513b0db32\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017200 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwddj\" (UniqueName: \"kubernetes.io/projected/c78e159b-369b-4085-8d58-71e513b0db32-kube-api-access-kwddj\") pod \"c78e159b-369b-4085-8d58-71e513b0db32\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017260 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-sg-core-conf-yaml\") pod \"c78e159b-369b-4085-8d58-71e513b0db32\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017346 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-scripts\") pod \"c78e159b-369b-4085-8d58-71e513b0db32\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017418 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-config-data\") pod \"c78e159b-369b-4085-8d58-71e513b0db32\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017469 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-combined-ca-bundle\") pod \"c78e159b-369b-4085-8d58-71e513b0db32\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017508 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-log-httpd\") pod \"c78e159b-369b-4085-8d58-71e513b0db32\" (UID: \"c78e159b-369b-4085-8d58-71e513b0db32\") " Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017627 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c78e159b-369b-4085-8d58-71e513b0db32" (UID: "c78e159b-369b-4085-8d58-71e513b0db32"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.017921 4980 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.018313 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c78e159b-369b-4085-8d58-71e513b0db32" (UID: "c78e159b-369b-4085-8d58-71e513b0db32"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.023923 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78e159b-369b-4085-8d58-71e513b0db32-kube-api-access-kwddj" (OuterVolumeSpecName: "kube-api-access-kwddj") pod "c78e159b-369b-4085-8d58-71e513b0db32" (UID: "c78e159b-369b-4085-8d58-71e513b0db32"). InnerVolumeSpecName "kube-api-access-kwddj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.025717 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-scripts" (OuterVolumeSpecName: "scripts") pod "c78e159b-369b-4085-8d58-71e513b0db32" (UID: "c78e159b-369b-4085-8d58-71e513b0db32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.046878 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c78e159b-369b-4085-8d58-71e513b0db32" (UID: "c78e159b-369b-4085-8d58-71e513b0db32"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.072416 4980 generic.go:334] "Generic (PLEG): container finished" podID="c78e159b-369b-4085-8d58-71e513b0db32" containerID="af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268" exitCode=0
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.072467 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerDied","Data":"af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268"}
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.072499 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c78e159b-369b-4085-8d58-71e513b0db32","Type":"ContainerDied","Data":"91a7050a816a7424ee9b7434d9fe2172e763d1033c756839bc21dd3bf449ffe3"}
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.072517 4980 scope.go:117] "RemoveContainer" containerID="f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.072639 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.092856 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c78e159b-369b-4085-8d58-71e513b0db32" (UID: "c78e159b-369b-4085-8d58-71e513b0db32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.119236 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.119269 4980 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c78e159b-369b-4085-8d58-71e513b0db32-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.119280 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwddj\" (UniqueName: \"kubernetes.io/projected/c78e159b-369b-4085-8d58-71e513b0db32-kube-api-access-kwddj\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.119289 4980 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.119299 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.119409 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-config-data" (OuterVolumeSpecName: "config-data") pod "c78e159b-369b-4085-8d58-71e513b0db32" (UID: "c78e159b-369b-4085-8d58-71e513b0db32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.123170 4980 scope.go:117] "RemoveContainer" containerID="6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.152060 4980 scope.go:117] "RemoveContainer" containerID="6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.177011 4980 scope.go:117] "RemoveContainer" containerID="af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.196711 4980 scope.go:117] "RemoveContainer" containerID="f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597"
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.197190 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597\": container with ID starting with f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597 not found: ID does not exist" containerID="f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.197230 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597"} err="failed to get container status \"f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597\": rpc error: code = NotFound desc = could not find container \"f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597\": container with ID starting with f22fc6bd6deb91d1879074fd24b3e42a7e5ace9beefcc3a317f88c471bb6d597 not found: ID does not exist"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.197256 4980 scope.go:117] "RemoveContainer" containerID="6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00"
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.197720 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00\": container with ID starting with 6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00 not found: ID does not exist" containerID="6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.197748 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00"} err="failed to get container status \"6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00\": rpc error: code = NotFound desc = could not find container \"6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00\": container with ID starting with 6deda0d72313375fb87a6906c71526f0c7905ab1cb8c075bd0545fcdb5dcff00 not found: ID does not exist"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.197766 4980 scope.go:117] "RemoveContainer" containerID="6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358"
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.197993 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358\": container with ID starting with 6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358 not found: ID does not exist" containerID="6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.198023 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358"} err="failed to get container status \"6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358\": rpc error: code = NotFound desc = could not find container \"6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358\": container with ID starting with 6ec31906b6371701d0b5710d4b1315dc06c057b1e7780fd686467e2760884358 not found: ID does not exist"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.198042 4980 scope.go:117] "RemoveContainer" containerID="af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268"
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.198474 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268\": container with ID starting with af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268 not found: ID does not exist" containerID="af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.198502 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268"} err="failed to get container status \"af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268\": rpc error: code = NotFound desc = could not find container \"af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268\": container with ID starting with af7abc49fce6bc75f433cfc39e08af97c0daf1b4218b68e1f0598d0c63b32268 not found: ID does not exist"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.221252 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78e159b-369b-4085-8d58-71e513b0db32-config-data\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.421761 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.430878 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.439364 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.439724 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-central-agent"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.439740 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-central-agent"
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.439771 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-notification-agent"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.439778 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-notification-agent"
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.439797 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="proxy-httpd"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.439804 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="proxy-httpd"
Jan 07 03:52:19 crc kubenswrapper[4980]: E0107 03:52:19.439815 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="sg-core"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.439820 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="sg-core"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.439972 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-notification-agent"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.439992 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="sg-core"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.440000 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="ceilometer-central-agent"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.440011 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78e159b-369b-4085-8d58-71e513b0db32" containerName="proxy-httpd"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.441482 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.446794 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.447077 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.465362 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.527703 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-config-data\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.527776 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-log-httpd\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.528076 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.528215 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbhbr\" (UniqueName: \"kubernetes.io/projected/8e1eb159-024c-41d9-bf18-ff49665ea348-kube-api-access-fbhbr\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.528270 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-run-httpd\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.528299 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.528348 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-scripts\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630013 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbhbr\" (UniqueName: \"kubernetes.io/projected/8e1eb159-024c-41d9-bf18-ff49665ea348-kube-api-access-fbhbr\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630110 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-run-httpd\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630150 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630195 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-scripts\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630245 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-config-data\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630313 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-log-httpd\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630457 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630828 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-run-httpd\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.630883 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-log-httpd\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.635512 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.635825 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.636382 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-config-data\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.658380 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbhbr\" (UniqueName: \"kubernetes.io/projected/8e1eb159-024c-41d9-bf18-ff49665ea348-kube-api-access-fbhbr\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.658407 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-scripts\") pod \"ceilometer-0\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") " pod="openstack/ceilometer-0"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.748036 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c78e159b-369b-4085-8d58-71e513b0db32" path="/var/lib/kubelet/pods/c78e159b-369b-4085-8d58-71e513b0db32/volumes"
Jan 07 03:52:19 crc kubenswrapper[4980]: I0107 03:52:19.764395 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:52:20 crc kubenswrapper[4980]: I0107 03:52:20.214833 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:21 crc kubenswrapper[4980]: I0107 03:52:21.363850 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:21 crc kubenswrapper[4980]: I0107 03:52:21.363922 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:21 crc kubenswrapper[4980]: I0107 03:52:21.410652 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:21 crc kubenswrapper[4980]: I0107 03:52:21.425789 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:22 crc kubenswrapper[4980]: I0107 03:52:22.111319 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:22 crc kubenswrapper[4980]: I0107 03:52:22.111388 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:22 crc kubenswrapper[4980]: I0107 03:52:22.351781 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:22 crc kubenswrapper[4980]: I0107 03:52:22.351834 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:22 crc kubenswrapper[4980]: I0107 03:52:22.392996 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:22 crc kubenswrapper[4980]: I0107 03:52:22.398490 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:23 crc kubenswrapper[4980]: I0107 03:52:23.118414 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:23 crc kubenswrapper[4980]: I0107 03:52:23.118729 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:23 crc kubenswrapper[4980]: E0107 03:52:23.858212 4980 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d74f1e6_2229_4ff7_8c80_b12d09285da4.slice/crio-980c543c3acef5c2165ccf6a21093814295d80a7c1dcb9e72159c8e4bea6b851\": RecentStats: unable to find data in memory cache]"
Jan 07 03:52:24 crc kubenswrapper[4980]: I0107 03:52:24.154227 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:24 crc kubenswrapper[4980]: I0107 03:52:24.154338 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 07 03:52:24 crc kubenswrapper[4980]: I0107 03:52:24.304734 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 07 03:52:25 crc kubenswrapper[4980]: I0107 03:52:25.146619 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" event={"ID":"c6dbf45b-58e2-4fa6-a127-d604586a3b44","Type":"ContainerStarted","Data":"78f30ae585c932568adc81c9cf47658a33fb996895684575c0b7fe38fb5a0e4c"}
Jan 07 03:52:25 crc kubenswrapper[4980]: I0107 03:52:25.153957 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerStarted","Data":"c8b1676d44dd0da6f85cee121dd5b135a1b4a8cd05bdb42a2f7f7fba494d1a93"}
Jan 07 03:52:25 crc kubenswrapper[4980]: I0107 03:52:25.175453 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" podStartSLOduration=1.650016275 podStartE2EDuration="9.175429764s" podCreationTimestamp="2026-01-07 03:52:16 +0000 UTC" firstStartedPulling="2026-01-07 03:52:17.185058718 +0000 UTC m=+1183.750753453" lastFinishedPulling="2026-01-07 03:52:24.710472207 +0000 UTC m=+1191.276166942" observedRunningTime="2026-01-07 03:52:25.170399928 +0000 UTC m=+1191.736094683" watchObservedRunningTime="2026-01-07 03:52:25.175429764 +0000 UTC m=+1191.741124509"
Jan 07 03:52:25 crc kubenswrapper[4980]: I0107 03:52:25.273498 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:25 crc kubenswrapper[4980]: I0107 03:52:25.273638 4980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 07 03:52:25 crc kubenswrapper[4980]: I0107 03:52:25.279318 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 07 03:52:26 crc kubenswrapper[4980]: I0107 03:52:26.165898 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerStarted","Data":"9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4"}
Jan 07 03:52:26 crc kubenswrapper[4980]: I0107 03:52:26.166296 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerStarted","Data":"5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0"}
Jan 07 03:52:27 crc kubenswrapper[4980]: I0107 03:52:27.177790 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerStarted","Data":"2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509"}
Jan 07 03:52:28 crc kubenswrapper[4980]: I0107 03:52:28.195146 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerStarted","Data":"e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24"}
Jan 07 03:52:28 crc kubenswrapper[4980]: I0107 03:52:28.195772 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 07 03:52:28 crc kubenswrapper[4980]: I0107 03:52:28.229566 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.974794044 podStartE2EDuration="9.229528444s" podCreationTimestamp="2026-01-07 03:52:19 +0000 UTC" firstStartedPulling="2026-01-07 03:52:24.643993463 +0000 UTC m=+1191.209688198" lastFinishedPulling="2026-01-07 03:52:27.898727853 +0000 UTC m=+1194.464422598" observedRunningTime="2026-01-07 03:52:28.226047876 +0000 UTC m=+1194.791742621" watchObservedRunningTime="2026-01-07 03:52:28.229528444 +0000 UTC m=+1194.795223189"
Jan 07 03:52:31 crc kubenswrapper[4980]: I0107 03:52:31.165253 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 07 03:52:31 crc kubenswrapper[4980]: I0107 03:52:31.166640 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-central-agent" containerID="cri-o://5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0" gracePeriod=30
Jan 07 03:52:31 crc kubenswrapper[4980]: I0107 03:52:31.166750 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="sg-core" containerID="cri-o://2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509" gracePeriod=30
Jan 07 03:52:31 crc kubenswrapper[4980]: I0107 03:52:31.166750 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-notification-agent" containerID="cri-o://9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4" gracePeriod=30
Jan 07 03:52:31 crc kubenswrapper[4980]: I0107 03:52:31.166811 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="proxy-httpd" containerID="cri-o://e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24" gracePeriod=30
Jan 07 03:52:32 crc kubenswrapper[4980]: I0107 03:52:32.239126 4980 generic.go:334] "Generic (PLEG): container finished" podID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerID="e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24" exitCode=0
Jan 07 03:52:32 crc kubenswrapper[4980]: I0107 03:52:32.239512 4980 generic.go:334] "Generic (PLEG): container finished" podID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerID="2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509" exitCode=2
Jan 07 03:52:32 crc kubenswrapper[4980]: I0107 03:52:32.239524 4980 generic.go:334] "Generic (PLEG): container finished" podID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerID="9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4" exitCode=0
Jan 07 03:52:32 crc kubenswrapper[4980]: I0107 03:52:32.239425 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerDied","Data":"e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24"}
Jan 07 03:52:32 crc kubenswrapper[4980]: I0107 03:52:32.239584 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerDied","Data":"2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509"}
Jan 07 03:52:32 crc kubenswrapper[4980]: I0107 03:52:32.239600 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerDied","Data":"9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4"}
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.220913 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.268875 4980 generic.go:334] "Generic (PLEG): container finished" podID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerID="5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0" exitCode=0
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.268947 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerDied","Data":"5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0"}
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.268967 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.269001 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1eb159-024c-41d9-bf18-ff49665ea348","Type":"ContainerDied","Data":"c8b1676d44dd0da6f85cee121dd5b135a1b4a8cd05bdb42a2f7f7fba494d1a93"}
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.269034 4980 scope.go:117] "RemoveContainer" containerID="e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24"
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.315084 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbhbr\" (UniqueName: \"kubernetes.io/projected/8e1eb159-024c-41d9-bf18-ff49665ea348-kube-api-access-fbhbr\") pod \"8e1eb159-024c-41d9-bf18-ff49665ea348\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") "
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.315456 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-scripts\") pod \"8e1eb159-024c-41d9-bf18-ff49665ea348\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") "
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.315578 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-combined-ca-bundle\") pod \"8e1eb159-024c-41d9-bf18-ff49665ea348\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") "
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.315781 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-config-data\") pod \"8e1eb159-024c-41d9-bf18-ff49665ea348\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") "
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.315885 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-sg-core-conf-yaml\") pod \"8e1eb159-024c-41d9-bf18-ff49665ea348\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") "
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.315973 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-log-httpd\") pod \"8e1eb159-024c-41d9-bf18-ff49665ea348\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") "
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.316063 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-run-httpd\") pod \"8e1eb159-024c-41d9-bf18-ff49665ea348\" (UID: \"8e1eb159-024c-41d9-bf18-ff49665ea348\") "
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.316870 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8e1eb159-024c-41d9-bf18-ff49665ea348" (UID: "8e1eb159-024c-41d9-bf18-ff49665ea348"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.323727 4980 scope.go:117] "RemoveContainer" containerID="2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509"
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.324264 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8e1eb159-024c-41d9-bf18-ff49665ea348" (UID: "8e1eb159-024c-41d9-bf18-ff49665ea348"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.333834 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1eb159-024c-41d9-bf18-ff49665ea348-kube-api-access-fbhbr" (OuterVolumeSpecName: "kube-api-access-fbhbr") pod "8e1eb159-024c-41d9-bf18-ff49665ea348" (UID: "8e1eb159-024c-41d9-bf18-ff49665ea348"). InnerVolumeSpecName "kube-api-access-fbhbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.335734 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-scripts" (OuterVolumeSpecName: "scripts") pod "8e1eb159-024c-41d9-bf18-ff49665ea348" (UID: "8e1eb159-024c-41d9-bf18-ff49665ea348"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.420788 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbhbr\" (UniqueName: \"kubernetes.io/projected/8e1eb159-024c-41d9-bf18-ff49665ea348-kube-api-access-fbhbr\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.420835 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-scripts\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.420844 4980 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.420853 4980 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1eb159-024c-41d9-bf18-ff49665ea348-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 07 03:52:34
crc kubenswrapper[4980]: I0107 03:52:34.428876 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8e1eb159-024c-41d9-bf18-ff49665ea348" (UID: "8e1eb159-024c-41d9-bf18-ff49665ea348"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.467855 4980 scope.go:117] "RemoveContainer" containerID="9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.478252 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e1eb159-024c-41d9-bf18-ff49665ea348" (UID: "8e1eb159-024c-41d9-bf18-ff49665ea348"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.480820 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-config-data" (OuterVolumeSpecName: "config-data") pod "8e1eb159-024c-41d9-bf18-ff49665ea348" (UID: "8e1eb159-024c-41d9-bf18-ff49665ea348"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.489195 4980 scope.go:117] "RemoveContainer" containerID="5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.522747 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.522780 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.522789 4980 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1eb159-024c-41d9-bf18-ff49665ea348-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.527667 4980 scope.go:117] "RemoveContainer" containerID="e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24" Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.527957 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24\": container with ID starting with e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24 not found: ID does not exist" containerID="e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.528003 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24"} err="failed to get container status 
\"e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24\": rpc error: code = NotFound desc = could not find container \"e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24\": container with ID starting with e388f383e06e3d157a5aeebbd9c48f9d47f8da65d96acf5336563a105ed5cd24 not found: ID does not exist" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.528033 4980 scope.go:117] "RemoveContainer" containerID="2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509" Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.528272 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509\": container with ID starting with 2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509 not found: ID does not exist" containerID="2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.528439 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509"} err="failed to get container status \"2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509\": rpc error: code = NotFound desc = could not find container \"2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509\": container with ID starting with 2cb95ba4a6bdb5cc0b27c9523c21929b6720e465c2ad1df7bf3e29e609699509 not found: ID does not exist" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.528522 4980 scope.go:117] "RemoveContainer" containerID="9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4" Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.528970 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4\": container with ID starting with 9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4 not found: ID does not exist" containerID="9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.528998 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4"} err="failed to get container status \"9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4\": rpc error: code = NotFound desc = could not find container \"9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4\": container with ID starting with 9472e9263e2480fe2b991a573ecf2734fd7e01125fc1fa16de499e5f2d59a1f4 not found: ID does not exist" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.529019 4980 scope.go:117] "RemoveContainer" containerID="5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0" Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.529237 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0\": container with ID starting with 5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0 not found: ID does not exist" containerID="5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.529257 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0"} err="failed to get container status \"5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0\": rpc error: code = NotFound desc = could not find container \"5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0\": container with ID 
starting with 5b48fe2d0e7677241d3fce658dc31a583fe754179c284735ab508373aa17d1a0 not found: ID does not exist" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.601484 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.611181 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.622608 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.622974 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="sg-core" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.622988 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="sg-core" Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.623006 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="proxy-httpd" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.623011 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="proxy-httpd" Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.623022 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-notification-agent" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.623030 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-notification-agent" Jan 07 03:52:34 crc kubenswrapper[4980]: E0107 03:52:34.623048 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-central-agent" Jan 07 03:52:34 crc kubenswrapper[4980]: 
I0107 03:52:34.623055 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-central-agent" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.623225 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="sg-core" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.623239 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-notification-agent" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.623252 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="proxy-httpd" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.623264 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" containerName="ceilometer-central-agent" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.624774 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.627026 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.627067 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.639770 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.727288 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-run-httpd\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.727400 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-log-httpd\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.727470 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.727523 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-scripts\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " 
pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.727545 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-config-data\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.727593 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8546z\" (UniqueName: \"kubernetes.io/projected/66329f69-013c-4533-9ca3-1d4a9fe9073c-kube-api-access-8546z\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.727623 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.829723 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-run-httpd\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.829882 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-log-httpd\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.829943 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.829994 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-config-data\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.830013 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-scripts\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.830044 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8546z\" (UniqueName: \"kubernetes.io/projected/66329f69-013c-4533-9ca3-1d4a9fe9073c-kube-api-access-8546z\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.830068 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.830380 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-run-httpd\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 
crc kubenswrapper[4980]: I0107 03:52:34.830682 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-log-httpd\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.838330 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.838324 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.838523 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-scripts\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.839886 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-config-data\") pod \"ceilometer-0\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.849299 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8546z\" (UniqueName: \"kubernetes.io/projected/66329f69-013c-4533-9ca3-1d4a9fe9073c-kube-api-access-8546z\") pod \"ceilometer-0\" (UID: 
\"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " pod="openstack/ceilometer-0" Jan 07 03:52:34 crc kubenswrapper[4980]: I0107 03:52:34.988023 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:52:35 crc kubenswrapper[4980]: I0107 03:52:35.517529 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:52:35 crc kubenswrapper[4980]: I0107 03:52:35.749249 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e1eb159-024c-41d9-bf18-ff49665ea348" path="/var/lib/kubelet/pods/8e1eb159-024c-41d9-bf18-ff49665ea348/volumes" Jan 07 03:52:36 crc kubenswrapper[4980]: I0107 03:52:36.299413 4980 generic.go:334] "Generic (PLEG): container finished" podID="c6dbf45b-58e2-4fa6-a127-d604586a3b44" containerID="78f30ae585c932568adc81c9cf47658a33fb996895684575c0b7fe38fb5a0e4c" exitCode=0 Jan 07 03:52:36 crc kubenswrapper[4980]: I0107 03:52:36.299644 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" event={"ID":"c6dbf45b-58e2-4fa6-a127-d604586a3b44","Type":"ContainerDied","Data":"78f30ae585c932568adc81c9cf47658a33fb996895684575c0b7fe38fb5a0e4c"} Jan 07 03:52:36 crc kubenswrapper[4980]: I0107 03:52:36.301232 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerStarted","Data":"2e2e735e80fd44eafda1e25371aacc8c9712fd4f8fdfac98c0e763c349bea90f"} Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.312047 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerStarted","Data":"3ed8266c2272f3bb7b2810804caf80d22867fbe9efa4d9d0beaedc1151883b5d"} Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.680929 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.797124 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99rj9\" (UniqueName: \"kubernetes.io/projected/c6dbf45b-58e2-4fa6-a127-d604586a3b44-kube-api-access-99rj9\") pod \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.797195 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-combined-ca-bundle\") pod \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.797286 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-config-data\") pod \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.797375 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-scripts\") pod \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\" (UID: \"c6dbf45b-58e2-4fa6-a127-d604586a3b44\") " Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.802724 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6dbf45b-58e2-4fa6-a127-d604586a3b44-kube-api-access-99rj9" (OuterVolumeSpecName: "kube-api-access-99rj9") pod "c6dbf45b-58e2-4fa6-a127-d604586a3b44" (UID: "c6dbf45b-58e2-4fa6-a127-d604586a3b44"). InnerVolumeSpecName "kube-api-access-99rj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.803114 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-scripts" (OuterVolumeSpecName: "scripts") pod "c6dbf45b-58e2-4fa6-a127-d604586a3b44" (UID: "c6dbf45b-58e2-4fa6-a127-d604586a3b44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.849697 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6dbf45b-58e2-4fa6-a127-d604586a3b44" (UID: "c6dbf45b-58e2-4fa6-a127-d604586a3b44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.852078 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-config-data" (OuterVolumeSpecName: "config-data") pod "c6dbf45b-58e2-4fa6-a127-d604586a3b44" (UID: "c6dbf45b-58e2-4fa6-a127-d604586a3b44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.899853 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99rj9\" (UniqueName: \"kubernetes.io/projected/c6dbf45b-58e2-4fa6-a127-d604586a3b44-kube-api-access-99rj9\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.900091 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.900183 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:37 crc kubenswrapper[4980]: I0107 03:52:37.900263 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbf45b-58e2-4fa6-a127-d604586a3b44-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.375837 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerStarted","Data":"46bf3ccf201c786e81ef65b74b15f50eabd1becb918424bd98329974a087f9ce"} Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.376248 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerStarted","Data":"d59c2e0f541ef6199bf8b88f34fc752de1c5eb9c041461be70ff1be5cb276666"} Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.378325 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" 
event={"ID":"c6dbf45b-58e2-4fa6-a127-d604586a3b44","Type":"ContainerDied","Data":"25d444c68de30c0ed3d3dbe9fb615adaef7f4af3038dab936aebf7dd971e0812"} Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.378434 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d444c68de30c0ed3d3dbe9fb615adaef7f4af3038dab936aebf7dd971e0812" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.378572 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfwjd" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.447268 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 07 03:52:38 crc kubenswrapper[4980]: E0107 03:52:38.447705 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6dbf45b-58e2-4fa6-a127-d604586a3b44" containerName="nova-cell0-conductor-db-sync" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.447723 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6dbf45b-58e2-4fa6-a127-d604586a3b44" containerName="nova-cell0-conductor-db-sync" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.447881 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6dbf45b-58e2-4fa6-a127-d604586a3b44" containerName="nova-cell0-conductor-db-sync" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.448484 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.451517 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tvmgq" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.453261 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.459379 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.615026 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cr6n\" (UniqueName: \"kubernetes.io/projected/86f2272b-45b2-490c-a64e-f4367491036b-kube-api-access-7cr6n\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.615106 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f2272b-45b2-490c-a64e-f4367491036b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.615158 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f2272b-45b2-490c-a64e-f4367491036b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.717094 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cr6n\" (UniqueName: 
\"kubernetes.io/projected/86f2272b-45b2-490c-a64e-f4367491036b-kube-api-access-7cr6n\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.717197 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f2272b-45b2-490c-a64e-f4367491036b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.717252 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f2272b-45b2-490c-a64e-f4367491036b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.724096 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f2272b-45b2-490c-a64e-f4367491036b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.737884 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f2272b-45b2-490c-a64e-f4367491036b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.742943 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cr6n\" (UniqueName: \"kubernetes.io/projected/86f2272b-45b2-490c-a64e-f4367491036b-kube-api-access-7cr6n\") pod \"nova-cell0-conductor-0\" (UID: 
\"86f2272b-45b2-490c-a64e-f4367491036b\") " pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:38 crc kubenswrapper[4980]: I0107 03:52:38.814632 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:39 crc kubenswrapper[4980]: I0107 03:52:39.301045 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 07 03:52:39 crc kubenswrapper[4980]: W0107 03:52:39.313601 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f2272b_45b2_490c_a64e_f4367491036b.slice/crio-e3d5a3458f392c81b8bffe5a0686d9b0f0381235ee0ba6c965ff4bbfbeb12c3c WatchSource:0}: Error finding container e3d5a3458f392c81b8bffe5a0686d9b0f0381235ee0ba6c965ff4bbfbeb12c3c: Status 404 returned error can't find the container with id e3d5a3458f392c81b8bffe5a0686d9b0f0381235ee0ba6c965ff4bbfbeb12c3c Jan 07 03:52:39 crc kubenswrapper[4980]: I0107 03:52:39.397103 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerStarted","Data":"e4629353bf5f5fa1562af2db98c989b1e848336d906925ffcd0e6a1d5b5ec7ae"} Jan 07 03:52:39 crc kubenswrapper[4980]: I0107 03:52:39.397419 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 07 03:52:39 crc kubenswrapper[4980]: I0107 03:52:39.398244 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"86f2272b-45b2-490c-a64e-f4367491036b","Type":"ContainerStarted","Data":"e3d5a3458f392c81b8bffe5a0686d9b0f0381235ee0ba6c965ff4bbfbeb12c3c"} Jan 07 03:52:39 crc kubenswrapper[4980]: I0107 03:52:39.432086 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.900491534 podStartE2EDuration="5.432052722s" podCreationTimestamp="2026-01-07 
03:52:34 +0000 UTC" firstStartedPulling="2026-01-07 03:52:35.527713773 +0000 UTC m=+1202.093408518" lastFinishedPulling="2026-01-07 03:52:39.059274961 +0000 UTC m=+1205.624969706" observedRunningTime="2026-01-07 03:52:39.413632567 +0000 UTC m=+1205.979327302" watchObservedRunningTime="2026-01-07 03:52:39.432052722 +0000 UTC m=+1205.997747497" Jan 07 03:52:40 crc kubenswrapper[4980]: I0107 03:52:40.411882 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"86f2272b-45b2-490c-a64e-f4367491036b","Type":"ContainerStarted","Data":"2e07dd6cea3e451c61d3aa10b20220b4934e67d2ff21566eeff48d63f1a1d528"} Jan 07 03:52:40 crc kubenswrapper[4980]: I0107 03:52:40.412211 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:40 crc kubenswrapper[4980]: I0107 03:52:40.442907 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.44287746 podStartE2EDuration="2.44287746s" podCreationTimestamp="2026-01-07 03:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:52:40.433641983 +0000 UTC m=+1206.999336728" watchObservedRunningTime="2026-01-07 03:52:40.44287746 +0000 UTC m=+1207.008572225" Jan 07 03:52:48 crc kubenswrapper[4980]: I0107 03:52:48.864253 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.414298 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pm66v"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.416378 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.420939 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.425321 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.448787 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm66v"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.559749 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.559839 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8dk\" (UniqueName: \"kubernetes.io/projected/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-kube-api-access-pt8dk\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.560857 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-scripts\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.561019 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-config-data\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.639920 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.641279 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.644203 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.652911 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663249 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663326 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-logs\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663367 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt8dk\" (UniqueName: \"kubernetes.io/projected/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-kube-api-access-pt8dk\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 
07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663453 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45bdt\" (UniqueName: \"kubernetes.io/projected/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-kube-api-access-45bdt\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663472 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-scripts\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663499 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663537 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-config-data\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.663573 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-config-data\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.745018 4980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.747268 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.753589 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.766587 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-logs\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.766819 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45bdt\" (UniqueName: \"kubernetes.io/projected/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-kube-api-access-45bdt\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.766889 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.766933 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-config-data\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.767444 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-logs\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.774805 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-config-data\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.777460 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-scripts\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.778027 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-config-data\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.787645 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.789148 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt8dk\" (UniqueName: \"kubernetes.io/projected/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-kube-api-access-pt8dk\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 
03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.795373 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.797328 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pm66v\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.827179 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45bdt\" (UniqueName: \"kubernetes.io/projected/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-kube-api-access-45bdt\") pod \"nova-api-0\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.868664 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.869789 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.870008 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.870097 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mhh\" (UniqueName: \"kubernetes.io/projected/55d986dc-458e-4bc8-9a09-d2f90e3d888c-kube-api-access-j9mhh\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.870180 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-config-data\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.877341 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.877562 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.890435 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.972137 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.972196 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88xk\" (UniqueName: \"kubernetes.io/projected/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-kube-api-access-k88xk\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.972238 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9mhh\" (UniqueName: \"kubernetes.io/projected/55d986dc-458e-4bc8-9a09-d2f90e3d888c-kube-api-access-j9mhh\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.972271 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.972306 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-config-data\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.972394 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.976268 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.977058 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.978358 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.984320 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 07 03:52:49 crc kubenswrapper[4980]: I0107 03:52:49.984949 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-config-data\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.009995 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.018967 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9mhh\" (UniqueName: \"kubernetes.io/projected/55d986dc-458e-4bc8-9a09-d2f90e3d888c-kube-api-access-j9mhh\") pod \"nova-scheduler-0\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " pod="openstack/nova-scheduler-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.041965 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.073363 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rgkl2"] Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.075100 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.077288 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88xk\" (UniqueName: \"kubernetes.io/projected/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-kube-api-access-k88xk\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.077373 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.077432 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-config-data\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.077450 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2553a50-7ad4-497f-a127-cfb7a2394ec1-logs\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.077502 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 
03:52:50.077518 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.077534 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ldt\" (UniqueName: \"kubernetes.io/projected/b2553a50-7ad4-497f-a127-cfb7a2394ec1-kube-api-access-b8ldt\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.084777 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.087195 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.090064 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rgkl2"] Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.106863 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k88xk\" (UniqueName: \"kubernetes.io/projected/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-kube-api-access-k88xk\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179302 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5h5\" (UniqueName: \"kubernetes.io/projected/5532d293-9182-4446-b2db-619e1af161c4-kube-api-access-nk5h5\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179394 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179416 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-config-data\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179431 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2553a50-7ad4-497f-a127-cfb7a2394ec1-logs\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179463 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc 
kubenswrapper[4980]: I0107 03:52:50.179486 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179526 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179543 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ldt\" (UniqueName: \"kubernetes.io/projected/b2553a50-7ad4-497f-a127-cfb7a2394ec1-kube-api-access-b8ldt\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179575 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-config\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.179602 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.181269 
4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2553a50-7ad4-497f-a127-cfb7a2394ec1-logs\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.184084 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-config-data\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.193542 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.195798 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ldt\" (UniqueName: \"kubernetes.io/projected/b2553a50-7ad4-497f-a127-cfb7a2394ec1-kube-api-access-b8ldt\") pod \"nova-metadata-0\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.217397 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.281963 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5h5\" (UniqueName: \"kubernetes.io/projected/5532d293-9182-4446-b2db-619e1af161c4-kube-api-access-nk5h5\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.282066 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.282134 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.282163 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.282225 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-config\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" 
Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.282282 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.283583 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.284927 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.285155 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.285463 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.285687 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-config\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.295126 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.305240 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5h5\" (UniqueName: \"kubernetes.io/projected/5532d293-9182-4446-b2db-619e1af161c4-kube-api-access-nk5h5\") pod \"dnsmasq-dns-757b4f8459-rgkl2\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.321382 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.441911 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.485796 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:52:50 crc kubenswrapper[4980]: W0107 03:52:50.522080 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea7912ee_f40e_44eb_be4b_f6c074a6db2c.slice/crio-7f40a5e21dcc7e6f9d1e13c81bbeaf906f79f21e5ea6b0af68c9680458f3a5f7 WatchSource:0}: Error finding container 7f40a5e21dcc7e6f9d1e13c81bbeaf906f79f21e5ea6b0af68c9680458f3a5f7: Status 404 returned error can't find the container with id 7f40a5e21dcc7e6f9d1e13c81bbeaf906f79f21e5ea6b0af68c9680458f3a5f7 Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.544698 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sg7zf"] Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.546172 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.548950 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.550905 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.558964 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sg7zf"] Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.633028 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm66v"] Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.694026 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-scripts\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.694073 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkhm\" (UniqueName: \"kubernetes.io/projected/762f824e-4099-41b1-ab8a-e20b9773b8a9-kube-api-access-vzkhm\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.694145 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " 
pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.694255 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-config-data\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.736165 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:52:50 crc kubenswrapper[4980]: W0107 03:52:50.766529 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55d986dc_458e_4bc8_9a09_d2f90e3d888c.slice/crio-00c474b6c8838cb14bd99be2f4fd0e761fd76c5cb7e71d9cf17c0d63219733ed WatchSource:0}: Error finding container 00c474b6c8838cb14bd99be2f4fd0e761fd76c5cb7e71d9cf17c0d63219733ed: Status 404 returned error can't find the container with id 00c474b6c8838cb14bd99be2f4fd0e761fd76c5cb7e71d9cf17c0d63219733ed Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.797301 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-config-data\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.797386 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-scripts\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.797415 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzkhm\" (UniqueName: \"kubernetes.io/projected/762f824e-4099-41b1-ab8a-e20b9773b8a9-kube-api-access-vzkhm\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.797500 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.802169 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-scripts\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.802461 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-config-data\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.802863 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.814682 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzkhm\" (UniqueName: \"kubernetes.io/projected/762f824e-4099-41b1-ab8a-e20b9773b8a9-kube-api-access-vzkhm\") pod \"nova-cell1-conductor-db-sync-sg7zf\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.894155 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.960309 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rgkl2"] Jan 07 03:52:50 crc kubenswrapper[4980]: W0107 03:52:50.974779 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5532d293_9182_4446_b2db_619e1af161c4.slice/crio-15ef04d9b316710222d6f82a0e3bf9bd30ead2a08b6871a9f43f57b107f4bec6 WatchSource:0}: Error finding container 15ef04d9b316710222d6f82a0e3bf9bd30ead2a08b6871a9f43f57b107f4bec6: Status 404 returned error can't find the container with id 15ef04d9b316710222d6f82a0e3bf9bd30ead2a08b6871a9f43f57b107f4bec6 Jan 07 03:52:50 crc kubenswrapper[4980]: I0107 03:52:50.980592 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.011779 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.358953 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sg7zf"] Jan 07 03:52:51 crc kubenswrapper[4980]: W0107 03:52:51.360216 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod762f824e_4099_41b1_ab8a_e20b9773b8a9.slice/crio-0fa7187b78c4cb5779db6148506feb21e21b63eeccb75673a9617dcffaaa3675 WatchSource:0}: Error finding container 0fa7187b78c4cb5779db6148506feb21e21b63eeccb75673a9617dcffaaa3675: Status 404 returned error can't find the container with id 0fa7187b78c4cb5779db6148506feb21e21b63eeccb75673a9617dcffaaa3675 Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.539207 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea7912ee-f40e-44eb-be4b-f6c074a6db2c","Type":"ContainerStarted","Data":"7f40a5e21dcc7e6f9d1e13c81bbeaf906f79f21e5ea6b0af68c9680458f3a5f7"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.541600 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" event={"ID":"5532d293-9182-4446-b2db-619e1af161c4","Type":"ContainerStarted","Data":"32c961cfd1a424af31ba752f24576c915f6b405b2eb0fc4db207d845031164c2"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.541640 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" event={"ID":"5532d293-9182-4446-b2db-619e1af161c4","Type":"ContainerStarted","Data":"15ef04d9b316710222d6f82a0e3bf9bd30ead2a08b6871a9f43f57b107f4bec6"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.545240 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2553a50-7ad4-497f-a127-cfb7a2394ec1","Type":"ContainerStarted","Data":"6b54378d782579e4a63bf6135efbecae24bb89974b740b048edd844696803736"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.546344 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6","Type":"ContainerStarted","Data":"816274d8bdc013d562ede0c8c92d8321f85be6dcc8f914bb18347f32bcb5cedd"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 
03:52:51.548450 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55d986dc-458e-4bc8-9a09-d2f90e3d888c","Type":"ContainerStarted","Data":"00c474b6c8838cb14bd99be2f4fd0e761fd76c5cb7e71d9cf17c0d63219733ed"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.551486 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" event={"ID":"762f824e-4099-41b1-ab8a-e20b9773b8a9","Type":"ContainerStarted","Data":"0fa7187b78c4cb5779db6148506feb21e21b63eeccb75673a9617dcffaaa3675"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.553161 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm66v" event={"ID":"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8","Type":"ContainerStarted","Data":"4dbe64f9ca82a1e834cecba63a09d7a009e3b4cb22f25c38063ba33000e34dde"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.553187 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm66v" event={"ID":"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8","Type":"ContainerStarted","Data":"b2d386cbd2ffd8bdc4b606cbfaddcb76f5cf9b4cd1b938834aec39fd7947aecf"} Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.624202 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" podStartSLOduration=1.624179484 podStartE2EDuration="1.624179484s" podCreationTimestamp="2026-01-07 03:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:52:51.621105679 +0000 UTC m=+1218.186800414" watchObservedRunningTime="2026-01-07 03:52:51.624179484 +0000 UTC m=+1218.189874269" Jan 07 03:52:51 crc kubenswrapper[4980]: I0107 03:52:51.634585 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pm66v" podStartSLOduration=2.633731622 
podStartE2EDuration="2.633731622s" podCreationTimestamp="2026-01-07 03:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:52:51.602419105 +0000 UTC m=+1218.168113850" watchObservedRunningTime="2026-01-07 03:52:51.633731622 +0000 UTC m=+1218.199426347" Jan 07 03:52:52 crc kubenswrapper[4980]: I0107 03:52:52.580885 4980 generic.go:334] "Generic (PLEG): container finished" podID="5532d293-9182-4446-b2db-619e1af161c4" containerID="32c961cfd1a424af31ba752f24576c915f6b405b2eb0fc4db207d845031164c2" exitCode=0 Jan 07 03:52:52 crc kubenswrapper[4980]: I0107 03:52:52.580975 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" event={"ID":"5532d293-9182-4446-b2db-619e1af161c4","Type":"ContainerDied","Data":"32c961cfd1a424af31ba752f24576c915f6b405b2eb0fc4db207d845031164c2"} Jan 07 03:52:52 crc kubenswrapper[4980]: I0107 03:52:52.587604 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" event={"ID":"762f824e-4099-41b1-ab8a-e20b9773b8a9","Type":"ContainerStarted","Data":"b564609c3ab74803d93f65393773b9eea0fd3ecbe9ce176c569e62102baf26bb"} Jan 07 03:52:53 crc kubenswrapper[4980]: I0107 03:52:53.228650 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:53 crc kubenswrapper[4980]: I0107 03:52:53.244807 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.619461 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6","Type":"ContainerStarted","Data":"85c2491584ac135102dac8b6f4e680e1681297864b7ae61232c23a217d87705b"} Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.619790 4980 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-cell1-novncproxy-0" podUID="a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://85c2491584ac135102dac8b6f4e680e1681297864b7ae61232c23a217d87705b" gracePeriod=30 Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.625877 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55d986dc-458e-4bc8-9a09-d2f90e3d888c","Type":"ContainerStarted","Data":"070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc"} Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.628952 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea7912ee-f40e-44eb-be4b-f6c074a6db2c","Type":"ContainerStarted","Data":"5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc"} Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.629008 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea7912ee-f40e-44eb-be4b-f6c074a6db2c","Type":"ContainerStarted","Data":"2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f"} Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.630903 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2553a50-7ad4-497f-a127-cfb7a2394ec1","Type":"ContainerStarted","Data":"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549"} Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.630930 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2553a50-7ad4-497f-a127-cfb7a2394ec1","Type":"ContainerStarted","Data":"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631"} Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.631028 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerName="nova-metadata-log" 
containerID="cri-o://a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631" gracePeriod=30 Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.631089 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerName="nova-metadata-metadata" containerID="cri-o://dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549" gracePeriod=30 Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.633653 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" event={"ID":"5532d293-9182-4446-b2db-619e1af161c4","Type":"ContainerStarted","Data":"7ab0573e2f0e4c9045afedb78ad1c2e8f951b0b643c94f4e94c2299e6ed059f7"} Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.634567 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.642702 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.77052265 podStartE2EDuration="6.642680605s" podCreationTimestamp="2026-01-07 03:52:49 +0000 UTC" firstStartedPulling="2026-01-07 03:52:51.017819985 +0000 UTC m=+1217.583514720" lastFinishedPulling="2026-01-07 03:52:54.88997794 +0000 UTC m=+1221.455672675" observedRunningTime="2026-01-07 03:52:55.63483913 +0000 UTC m=+1222.200533865" watchObservedRunningTime="2026-01-07 03:52:55.642680605 +0000 UTC m=+1222.208375340" Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.658998 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" podStartSLOduration=5.658980473 podStartE2EDuration="5.658980473s" podCreationTimestamp="2026-01-07 03:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-07 03:52:55.65532843 +0000 UTC m=+1222.221023155" watchObservedRunningTime="2026-01-07 03:52:55.658980473 +0000 UTC m=+1222.224675208" Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.680751 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.825842336 podStartE2EDuration="6.680732632s" podCreationTimestamp="2026-01-07 03:52:49 +0000 UTC" firstStartedPulling="2026-01-07 03:52:51.01220909 +0000 UTC m=+1217.577903825" lastFinishedPulling="2026-01-07 03:52:54.867099386 +0000 UTC m=+1221.432794121" observedRunningTime="2026-01-07 03:52:55.676002094 +0000 UTC m=+1222.241696829" watchObservedRunningTime="2026-01-07 03:52:55.680732632 +0000 UTC m=+1222.246427367" Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.719468 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.430661407 podStartE2EDuration="6.71944958s" podCreationTimestamp="2026-01-07 03:52:49 +0000 UTC" firstStartedPulling="2026-01-07 03:52:50.530222822 +0000 UTC m=+1217.095917557" lastFinishedPulling="2026-01-07 03:52:54.819010995 +0000 UTC m=+1221.384705730" observedRunningTime="2026-01-07 03:52:55.697126064 +0000 UTC m=+1222.262820799" watchObservedRunningTime="2026-01-07 03:52:55.71944958 +0000 UTC m=+1222.285144315" Jan 07 03:52:55 crc kubenswrapper[4980]: I0107 03:52:55.719591 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.679003535 podStartE2EDuration="6.719587825s" podCreationTimestamp="2026-01-07 03:52:49 +0000 UTC" firstStartedPulling="2026-01-07 03:52:50.768459485 +0000 UTC m=+1217.334154230" lastFinishedPulling="2026-01-07 03:52:54.809043785 +0000 UTC m=+1221.374738520" observedRunningTime="2026-01-07 03:52:55.71496732 +0000 UTC m=+1222.280662055" watchObservedRunningTime="2026-01-07 03:52:55.719587825 +0000 UTC m=+1222.285282560" Jan 
07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.306076 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.398609 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-combined-ca-bundle\") pod \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.398737 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-config-data\") pod \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.398802 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ldt\" (UniqueName: \"kubernetes.io/projected/b2553a50-7ad4-497f-a127-cfb7a2394ec1-kube-api-access-b8ldt\") pod \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.398914 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2553a50-7ad4-497f-a127-cfb7a2394ec1-logs\") pod \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\" (UID: \"b2553a50-7ad4-497f-a127-cfb7a2394ec1\") " Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.400313 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2553a50-7ad4-497f-a127-cfb7a2394ec1-logs" (OuterVolumeSpecName: "logs") pod "b2553a50-7ad4-497f-a127-cfb7a2394ec1" (UID: "b2553a50-7ad4-497f-a127-cfb7a2394ec1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.404660 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2553a50-7ad4-497f-a127-cfb7a2394ec1-kube-api-access-b8ldt" (OuterVolumeSpecName: "kube-api-access-b8ldt") pod "b2553a50-7ad4-497f-a127-cfb7a2394ec1" (UID: "b2553a50-7ad4-497f-a127-cfb7a2394ec1"). InnerVolumeSpecName "kube-api-access-b8ldt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.427061 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2553a50-7ad4-497f-a127-cfb7a2394ec1" (UID: "b2553a50-7ad4-497f-a127-cfb7a2394ec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.442216 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-config-data" (OuterVolumeSpecName: "config-data") pod "b2553a50-7ad4-497f-a127-cfb7a2394ec1" (UID: "b2553a50-7ad4-497f-a127-cfb7a2394ec1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.501968 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2553a50-7ad4-497f-a127-cfb7a2394ec1-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.502003 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.502015 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2553a50-7ad4-497f-a127-cfb7a2394ec1-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.502025 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ldt\" (UniqueName: \"kubernetes.io/projected/b2553a50-7ad4-497f-a127-cfb7a2394ec1-kube-api-access-b8ldt\") on node \"crc\" DevicePath \"\"" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.651731 4980 generic.go:334] "Generic (PLEG): container finished" podID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerID="dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549" exitCode=0 Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.651762 4980 generic.go:334] "Generic (PLEG): container finished" podID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerID="a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631" exitCode=143 Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.652623 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.655647 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2553a50-7ad4-497f-a127-cfb7a2394ec1","Type":"ContainerDied","Data":"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549"} Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.655688 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2553a50-7ad4-497f-a127-cfb7a2394ec1","Type":"ContainerDied","Data":"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631"} Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.655699 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2553a50-7ad4-497f-a127-cfb7a2394ec1","Type":"ContainerDied","Data":"6b54378d782579e4a63bf6135efbecae24bb89974b740b048edd844696803736"} Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.655714 4980 scope.go:117] "RemoveContainer" containerID="dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.686526 4980 scope.go:117] "RemoveContainer" containerID="a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.692052 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.703393 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.716806 4980 scope.go:117] "RemoveContainer" containerID="dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549" Jan 07 03:52:56 crc kubenswrapper[4980]: E0107 03:52:56.717936 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549\": container with ID starting with dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549 not found: ID does not exist" containerID="dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.717998 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549"} err="failed to get container status \"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549\": rpc error: code = NotFound desc = could not find container \"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549\": container with ID starting with dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549 not found: ID does not exist" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.718035 4980 scope.go:117] "RemoveContainer" containerID="a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631" Jan 07 03:52:56 crc kubenswrapper[4980]: E0107 03:52:56.720686 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631\": container with ID starting with a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631 not found: ID does not exist" containerID="a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.720788 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631"} err="failed to get container status \"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631\": rpc error: code = NotFound desc = could not find container \"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631\": container with ID 
starting with a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631 not found: ID does not exist" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.720848 4980 scope.go:117] "RemoveContainer" containerID="dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.721302 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549"} err="failed to get container status \"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549\": rpc error: code = NotFound desc = could not find container \"dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549\": container with ID starting with dc02ec6c1441c4f8db24d11c183612dbef6f4ba5d4e27864457cae84a9526549 not found: ID does not exist" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.721335 4980 scope.go:117] "RemoveContainer" containerID="a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.721741 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631"} err="failed to get container status \"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631\": rpc error: code = NotFound desc = could not find container \"a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631\": container with ID starting with a1d8170a4d69c0485b015e0ef2a039c28ecb11913d3d41093a660ca8d9966631 not found: ID does not exist" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.725237 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:56 crc kubenswrapper[4980]: E0107 03:52:56.725746 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" 
containerName="nova-metadata-metadata" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.725769 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerName="nova-metadata-metadata" Jan 07 03:52:56 crc kubenswrapper[4980]: E0107 03:52:56.725798 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerName="nova-metadata-log" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.725807 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerName="nova-metadata-log" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.726074 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerName="nova-metadata-log" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.726093 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" containerName="nova-metadata-metadata" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.728183 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.737513 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.750457 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.751029 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.810211 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkq2p\" (UniqueName: \"kubernetes.io/projected/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-kube-api-access-kkq2p\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.810282 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.810360 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-config-data\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.810530 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.810583 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-logs\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.917535 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.917648 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-logs\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.917752 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkq2p\" (UniqueName: \"kubernetes.io/projected/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-kube-api-access-kkq2p\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.917797 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " 
pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.917867 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-config-data\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.918613 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-logs\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.925034 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.925425 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-config-data\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.926339 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:56 crc kubenswrapper[4980]: I0107 03:52:56.952800 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkq2p\" (UniqueName: 
\"kubernetes.io/projected/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-kube-api-access-kkq2p\") pod \"nova-metadata-0\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " pod="openstack/nova-metadata-0" Jan 07 03:52:57 crc kubenswrapper[4980]: I0107 03:52:57.064126 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:52:57 crc kubenswrapper[4980]: I0107 03:52:57.607812 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:52:57 crc kubenswrapper[4980]: W0107 03:52:57.610339 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod713d5fd3_e5b8_4e11_9e99_5f0ccadfc969.slice/crio-020aaf53b2d032f37c11068f97d3bb907be9e251da81138250c4872014bf7b7f WatchSource:0}: Error finding container 020aaf53b2d032f37c11068f97d3bb907be9e251da81138250c4872014bf7b7f: Status 404 returned error can't find the container with id 020aaf53b2d032f37c11068f97d3bb907be9e251da81138250c4872014bf7b7f Jan 07 03:52:57 crc kubenswrapper[4980]: I0107 03:52:57.680241 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969","Type":"ContainerStarted","Data":"020aaf53b2d032f37c11068f97d3bb907be9e251da81138250c4872014bf7b7f"} Jan 07 03:52:57 crc kubenswrapper[4980]: I0107 03:52:57.751916 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2553a50-7ad4-497f-a127-cfb7a2394ec1" path="/var/lib/kubelet/pods/b2553a50-7ad4-497f-a127-cfb7a2394ec1/volumes" Jan 07 03:52:58 crc kubenswrapper[4980]: I0107 03:52:58.696839 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969","Type":"ContainerStarted","Data":"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e"} Jan 07 03:52:58 crc kubenswrapper[4980]: I0107 03:52:58.697151 4980 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969","Type":"ContainerStarted","Data":"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683"} Jan 07 03:52:58 crc kubenswrapper[4980]: I0107 03:52:58.699100 4980 generic.go:334] "Generic (PLEG): container finished" podID="38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" containerID="4dbe64f9ca82a1e834cecba63a09d7a009e3b4cb22f25c38063ba33000e34dde" exitCode=0 Jan 07 03:52:58 crc kubenswrapper[4980]: I0107 03:52:58.699179 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm66v" event={"ID":"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8","Type":"ContainerDied","Data":"4dbe64f9ca82a1e834cecba63a09d7a009e3b4cb22f25c38063ba33000e34dde"} Jan 07 03:52:58 crc kubenswrapper[4980]: I0107 03:52:58.738222 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.738192907 podStartE2EDuration="2.738192907s" podCreationTimestamp="2026-01-07 03:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:52:58.722846939 +0000 UTC m=+1225.288541714" watchObservedRunningTime="2026-01-07 03:52:58.738192907 +0000 UTC m=+1225.303887672" Jan 07 03:52:59 crc kubenswrapper[4980]: I0107 03:52:59.714796 4980 generic.go:334] "Generic (PLEG): container finished" podID="762f824e-4099-41b1-ab8a-e20b9773b8a9" containerID="b564609c3ab74803d93f65393773b9eea0fd3ecbe9ce176c569e62102baf26bb" exitCode=0 Jan 07 03:52:59 crc kubenswrapper[4980]: I0107 03:52:59.714959 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" event={"ID":"762f824e-4099-41b1-ab8a-e20b9773b8a9","Type":"ContainerDied","Data":"b564609c3ab74803d93f65393773b9eea0fd3ecbe9ce176c569e62102baf26bb"} Jan 07 03:52:59 crc kubenswrapper[4980]: I0107 03:52:59.878542 4980 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 07 03:52:59 crc kubenswrapper[4980]: I0107 03:52:59.878621 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.218632 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.219015 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.262580 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.279126 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.294187 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt8dk\" (UniqueName: \"kubernetes.io/projected/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-kube-api-access-pt8dk\") pod \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.294302 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-scripts\") pod \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.294361 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-combined-ca-bundle\") pod \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\" (UID: 
\"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.294402 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-config-data\") pod \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\" (UID: \"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8\") " Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.295734 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.307369 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-kube-api-access-pt8dk" (OuterVolumeSpecName: "kube-api-access-pt8dk") pod "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" (UID: "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8"). InnerVolumeSpecName "kube-api-access-pt8dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.330132 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-scripts" (OuterVolumeSpecName: "scripts") pod "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" (UID: "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.352639 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-config-data" (OuterVolumeSpecName: "config-data") pod "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" (UID: "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.397662 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" (UID: "38b876f0-a9bd-4b8c-97e0-ea68e0c935f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.398369 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.398401 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.398416 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.398429 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt8dk\" (UniqueName: \"kubernetes.io/projected/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8-kube-api-access-pt8dk\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.443760 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.517715 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2b5wd"] Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.518024 4980 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" podUID="e34a3269-fb24-4cc9-9f82-29784752137a" containerName="dnsmasq-dns" containerID="cri-o://c75ffdd141ae1fc5672e21e14b6838abcf68a947b26dd5c4926d2065d8247ef5" gracePeriod=10 Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.723605 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm66v" event={"ID":"38b876f0-a9bd-4b8c-97e0-ea68e0c935f8","Type":"ContainerDied","Data":"b2d386cbd2ffd8bdc4b606cbfaddcb76f5cf9b4cd1b938834aec39fd7947aecf"} Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.723652 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d386cbd2ffd8bdc4b606cbfaddcb76f5cf9b4cd1b938834aec39fd7947aecf" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.723715 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm66v" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.733880 4980 generic.go:334] "Generic (PLEG): container finished" podID="e34a3269-fb24-4cc9-9f82-29784752137a" containerID="c75ffdd141ae1fc5672e21e14b6838abcf68a947b26dd5c4926d2065d8247ef5" exitCode=0 Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.734004 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" event={"ID":"e34a3269-fb24-4cc9-9f82-29784752137a","Type":"ContainerDied","Data":"c75ffdd141ae1fc5672e21e14b6838abcf68a947b26dd5c4926d2065d8247ef5"} Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.804076 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.955565 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.955785 4980 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-api-0" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-log" containerID="cri-o://2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f" gracePeriod=30 Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.955889 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-api" containerID="cri-o://5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc" gracePeriod=30 Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.961779 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.961886 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.988093 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.988317 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-log" containerID="cri-o://aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683" gracePeriod=30 Jan 07 03:53:00 crc kubenswrapper[4980]: I0107 03:53:00.988747 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-metadata" containerID="cri-o://40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e" gracePeriod=30 Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.084736 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.125401 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kjb5\" (UniqueName: \"kubernetes.io/projected/e34a3269-fb24-4cc9-9f82-29784752137a-kube-api-access-2kjb5\") pod \"e34a3269-fb24-4cc9-9f82-29784752137a\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.125540 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-swift-storage-0\") pod \"e34a3269-fb24-4cc9-9f82-29784752137a\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.125595 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-nb\") pod \"e34a3269-fb24-4cc9-9f82-29784752137a\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.125647 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-sb\") pod \"e34a3269-fb24-4cc9-9f82-29784752137a\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.125870 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-svc\") pod \"e34a3269-fb24-4cc9-9f82-29784752137a\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.125906 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-config\") pod \"e34a3269-fb24-4cc9-9f82-29784752137a\" (UID: \"e34a3269-fb24-4cc9-9f82-29784752137a\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.155050 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34a3269-fb24-4cc9-9f82-29784752137a-kube-api-access-2kjb5" (OuterVolumeSpecName: "kube-api-access-2kjb5") pod "e34a3269-fb24-4cc9-9f82-29784752137a" (UID: "e34a3269-fb24-4cc9-9f82-29784752137a"). InnerVolumeSpecName "kube-api-access-2kjb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.211376 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e34a3269-fb24-4cc9-9f82-29784752137a" (UID: "e34a3269-fb24-4cc9-9f82-29784752137a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.225041 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e34a3269-fb24-4cc9-9f82-29784752137a" (UID: "e34a3269-fb24-4cc9-9f82-29784752137a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.229865 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.229896 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kjb5\" (UniqueName: \"kubernetes.io/projected/e34a3269-fb24-4cc9-9f82-29784752137a-kube-api-access-2kjb5\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.229906 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.235176 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-config" (OuterVolumeSpecName: "config") pod "e34a3269-fb24-4cc9-9f82-29784752137a" (UID: "e34a3269-fb24-4cc9-9f82-29784752137a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.235453 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e34a3269-fb24-4cc9-9f82-29784752137a" (UID: "e34a3269-fb24-4cc9-9f82-29784752137a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.252896 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e34a3269-fb24-4cc9-9f82-29784752137a" (UID: "e34a3269-fb24-4cc9-9f82-29784752137a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.331357 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.331388 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.331399 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e34a3269-fb24-4cc9-9f82-29784752137a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.335304 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.388355 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.432703 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-config-data\") pod \"762f824e-4099-41b1-ab8a-e20b9773b8a9\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.432834 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-scripts\") pod \"762f824e-4099-41b1-ab8a-e20b9773b8a9\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.432941 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzkhm\" (UniqueName: \"kubernetes.io/projected/762f824e-4099-41b1-ab8a-e20b9773b8a9-kube-api-access-vzkhm\") pod \"762f824e-4099-41b1-ab8a-e20b9773b8a9\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.432986 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-combined-ca-bundle\") pod \"762f824e-4099-41b1-ab8a-e20b9773b8a9\" (UID: \"762f824e-4099-41b1-ab8a-e20b9773b8a9\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.436110 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-scripts" (OuterVolumeSpecName: "scripts") pod "762f824e-4099-41b1-ab8a-e20b9773b8a9" (UID: "762f824e-4099-41b1-ab8a-e20b9773b8a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.437893 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762f824e-4099-41b1-ab8a-e20b9773b8a9-kube-api-access-vzkhm" (OuterVolumeSpecName: "kube-api-access-vzkhm") pod "762f824e-4099-41b1-ab8a-e20b9773b8a9" (UID: "762f824e-4099-41b1-ab8a-e20b9773b8a9"). InnerVolumeSpecName "kube-api-access-vzkhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.459558 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.459551 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-config-data" (OuterVolumeSpecName: "config-data") pod "762f824e-4099-41b1-ab8a-e20b9773b8a9" (UID: "762f824e-4099-41b1-ab8a-e20b9773b8a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.461595 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "762f824e-4099-41b1-ab8a-e20b9773b8a9" (UID: "762f824e-4099-41b1-ab8a-e20b9773b8a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.533899 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkq2p\" (UniqueName: \"kubernetes.io/projected/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-kube-api-access-kkq2p\") pod \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534110 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-combined-ca-bundle\") pod \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534165 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-config-data\") pod \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534226 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-logs\") pod \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534261 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-nova-metadata-tls-certs\") pod \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\" (UID: \"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969\") " Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534641 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534658 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534668 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzkhm\" (UniqueName: \"kubernetes.io/projected/762f824e-4099-41b1-ab8a-e20b9773b8a9-kube-api-access-vzkhm\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.534678 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762f824e-4099-41b1-ab8a-e20b9773b8a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.535020 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-logs" (OuterVolumeSpecName: "logs") pod "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" (UID: "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.546726 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-kube-api-access-kkq2p" (OuterVolumeSpecName: "kube-api-access-kkq2p") pod "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" (UID: "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969"). InnerVolumeSpecName "kube-api-access-kkq2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.568958 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-config-data" (OuterVolumeSpecName: "config-data") pod "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" (UID: "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.580224 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" (UID: "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.593970 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" (UID: "713d5fd3-e5b8-4e11-9e99-5f0ccadfc969"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.636168 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.636564 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.636579 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.636613 4980 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.636629 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkq2p\" (UniqueName: \"kubernetes.io/projected/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969-kube-api-access-kkq2p\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.743460 4980 generic.go:334] "Generic (PLEG): container finished" podID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerID="40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e" exitCode=0 Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.743490 4980 generic.go:334] "Generic (PLEG): container finished" podID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerID="aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683" exitCode=143 Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.743590 4980 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.745486 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.758168 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.762850 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerID="2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f" exitCode=143 Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.790230 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969","Type":"ContainerDied","Data":"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e"} Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.790271 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969","Type":"ContainerDied","Data":"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683"} Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.790286 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"713d5fd3-e5b8-4e11-9e99-5f0ccadfc969","Type":"ContainerDied","Data":"020aaf53b2d032f37c11068f97d3bb907be9e251da81138250c4872014bf7b7f"} Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.790299 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2b5wd" event={"ID":"e34a3269-fb24-4cc9-9f82-29784752137a","Type":"ContainerDied","Data":"7ee29ade6a071d7c73ce95e64904edfe81fa83721a220bc95c5192bd1d286dca"} Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 
03:53:01.790314 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sg7zf" event={"ID":"762f824e-4099-41b1-ab8a-e20b9773b8a9","Type":"ContainerDied","Data":"0fa7187b78c4cb5779db6148506feb21e21b63eeccb75673a9617dcffaaa3675"} Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.790327 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fa7187b78c4cb5779db6148506feb21e21b63eeccb75673a9617dcffaaa3675" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.790339 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea7912ee-f40e-44eb-be4b-f6c074a6db2c","Type":"ContainerDied","Data":"2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f"} Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.790361 4980 scope.go:117] "RemoveContainer" containerID="40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.839874 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.866906 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.878797 4980 scope.go:117] "RemoveContainer" containerID="aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.887248 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.887696 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762f824e-4099-41b1-ab8a-e20b9773b8a9" containerName="nova-cell1-conductor-db-sync" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.887710 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="762f824e-4099-41b1-ab8a-e20b9773b8a9" 
containerName="nova-cell1-conductor-db-sync" Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.887734 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34a3269-fb24-4cc9-9f82-29784752137a" containerName="dnsmasq-dns" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.887740 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34a3269-fb24-4cc9-9f82-29784752137a" containerName="dnsmasq-dns" Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.887752 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-log" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.887758 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-log" Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.887767 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-metadata" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.887773 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-metadata" Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.887781 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" containerName="nova-manage" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.887786 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" containerName="nova-manage" Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.887803 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34a3269-fb24-4cc9-9f82-29784752137a" containerName="init" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.887809 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34a3269-fb24-4cc9-9f82-29784752137a" 
containerName="init" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.888006 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" containerName="nova-manage" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.888023 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="762f824e-4099-41b1-ab8a-e20b9773b8a9" containerName="nova-cell1-conductor-db-sync" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.888031 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-metadata" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.888051 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34a3269-fb24-4cc9-9f82-29784752137a" containerName="dnsmasq-dns" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.888058 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" containerName="nova-metadata-log" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.889416 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.894053 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.894274 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.895998 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2b5wd"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.921817 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2b5wd"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.933170 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.933725 4980 scope.go:117] "RemoveContainer" containerID="40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.934310 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.934396 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e\": container with ID starting with 40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e not found: ID does not exist" containerID="40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.934426 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e"} err="failed to get container status \"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e\": rpc error: code = NotFound desc = could not find container \"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e\": container with ID starting with 40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e not found: ID does not exist" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.934446 4980 scope.go:117] "RemoveContainer" containerID="aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683" Jan 07 03:53:01 crc kubenswrapper[4980]: E0107 03:53:01.934863 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683\": container with ID starting with aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683 not found: ID does not exist" containerID="aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.934904 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683"} 
err="failed to get container status \"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683\": rpc error: code = NotFound desc = could not find container \"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683\": container with ID starting with aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683 not found: ID does not exist" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.934934 4980 scope.go:117] "RemoveContainer" containerID="40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.936539 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.938992 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.942695 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e"} err="failed to get container status \"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e\": rpc error: code = NotFound desc = could not find container \"40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e\": container with ID starting with 40b5af37405a11c4f1e380ef7d1fbe89314a8132adeac9364be45ca43abdd38e not found: ID does not exist" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.942722 4980 scope.go:117] "RemoveContainer" containerID="aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.943129 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683"} err="failed to get container status \"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683\": rpc error: code = 
NotFound desc = could not find container \"aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683\": container with ID starting with aa9e058d3a8d03f3c287a3abe85ace10a5cc49b580f7fc7eb39310a6ad90b683 not found: ID does not exist" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.943180 4980 scope.go:117] "RemoveContainer" containerID="c75ffdd141ae1fc5672e21e14b6838abcf68a947b26dd5c4926d2065d8247ef5" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.948799 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.949899 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.949935 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.949974 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hslj5\" (UniqueName: \"kubernetes.io/projected/51c53ece-42c6-4005-81ea-d51fac7c3c11-kube-api-access-hslj5\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.950027 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/51c53ece-42c6-4005-81ea-d51fac7c3c11-logs\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.950068 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-config-data\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:01 crc kubenswrapper[4980]: I0107 03:53:01.965341 4980 scope.go:117] "RemoveContainer" containerID="501edb08c4f717a7a0644e2e379dfe88410b64e74d4423bfba4853318c0bc894" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051024 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051074 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051115 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hslj5\" (UniqueName: \"kubernetes.io/projected/51c53ece-42c6-4005-81ea-d51fac7c3c11-kube-api-access-hslj5\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051167 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-9hb6s\" (UniqueName: \"kubernetes.io/projected/c840061e-cf97-4c53-b581-805806d7343c-kube-api-access-9hb6s\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051187 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c840061e-cf97-4c53-b581-805806d7343c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051206 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c53ece-42c6-4005-81ea-d51fac7c3c11-logs\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051252 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-config-data\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051316 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c840061e-cf97-4c53-b581-805806d7343c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.051668 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c53ece-42c6-4005-81ea-d51fac7c3c11-logs\") pod 
\"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.056217 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.056378 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-config-data\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.059007 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.069392 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hslj5\" (UniqueName: \"kubernetes.io/projected/51c53ece-42c6-4005-81ea-d51fac7c3c11-kube-api-access-hslj5\") pod \"nova-metadata-0\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.152534 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb6s\" (UniqueName: \"kubernetes.io/projected/c840061e-cf97-4c53-b581-805806d7343c-kube-api-access-9hb6s\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 
03:53:02.152604 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c840061e-cf97-4c53-b581-805806d7343c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.153459 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c840061e-cf97-4c53-b581-805806d7343c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.158987 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c840061e-cf97-4c53-b581-805806d7343c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.167436 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c840061e-cf97-4c53-b581-805806d7343c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.176660 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb6s\" (UniqueName: \"kubernetes.io/projected/c840061e-cf97-4c53-b581-805806d7343c-kube-api-access-9hb6s\") pod \"nova-cell1-conductor-0\" (UID: \"c840061e-cf97-4c53-b581-805806d7343c\") " pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.221229 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.258137 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.714452 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.775815 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c53ece-42c6-4005-81ea-d51fac7c3c11","Type":"ContainerStarted","Data":"27d4625816783f4e6c1da9a49124095ef013fe8dc0212710287811f42725d86f"} Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.778793 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="55d986dc-458e-4bc8-9a09-d2f90e3d888c" containerName="nova-scheduler-scheduler" containerID="cri-o://070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc" gracePeriod=30 Jan 07 03:53:02 crc kubenswrapper[4980]: I0107 03:53:02.812238 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 07 03:53:02 crc kubenswrapper[4980]: W0107 03:53:02.820603 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc840061e_cf97_4c53_b581_805806d7343c.slice/crio-9bad51b8a9abc5d8d8a6869d6d3bdff15f1c525a2f2a1ef23b514022e0d58ee4 WatchSource:0}: Error finding container 9bad51b8a9abc5d8d8a6869d6d3bdff15f1c525a2f2a1ef23b514022e0d58ee4: Status 404 returned error can't find the container with id 9bad51b8a9abc5d8d8a6869d6d3bdff15f1c525a2f2a1ef23b514022e0d58ee4 Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.757342 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713d5fd3-e5b8-4e11-9e99-5f0ccadfc969" path="/var/lib/kubelet/pods/713d5fd3-e5b8-4e11-9e99-5f0ccadfc969/volumes" 
Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.758646 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34a3269-fb24-4cc9-9f82-29784752137a" path="/var/lib/kubelet/pods/e34a3269-fb24-4cc9-9f82-29784752137a/volumes" Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.792960 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c840061e-cf97-4c53-b581-805806d7343c","Type":"ContainerStarted","Data":"a365a88dfe6d4a5eee828fe1d57944bb97cc5a05b0a64e0e429c26dfc08d4065"} Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.793013 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c840061e-cf97-4c53-b581-805806d7343c","Type":"ContainerStarted","Data":"9bad51b8a9abc5d8d8a6869d6d3bdff15f1c525a2f2a1ef23b514022e0d58ee4"} Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.794935 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c53ece-42c6-4005-81ea-d51fac7c3c11","Type":"ContainerStarted","Data":"f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49"} Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.794990 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c53ece-42c6-4005-81ea-d51fac7c3c11","Type":"ContainerStarted","Data":"146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd"} Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.835947 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.83592743 podStartE2EDuration="2.83592743s" podCreationTimestamp="2026-01-07 03:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:03.830757809 +0000 UTC m=+1230.396452544" watchObservedRunningTime="2026-01-07 03:53:03.83592743 
+0000 UTC m=+1230.401622165" Jan 07 03:53:03 crc kubenswrapper[4980]: I0107 03:53:03.853620 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.853600032 podStartE2EDuration="2.853600032s" podCreationTimestamp="2026-01-07 03:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:03.84520963 +0000 UTC m=+1230.410904365" watchObservedRunningTime="2026-01-07 03:53:03.853600032 +0000 UTC m=+1230.419294767" Jan 07 03:53:04 crc kubenswrapper[4980]: I0107 03:53:04.808696 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:05 crc kubenswrapper[4980]: I0107 03:53:05.002931 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 07 03:53:05 crc kubenswrapper[4980]: E0107 03:53:05.220613 4980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 07 03:53:05 crc kubenswrapper[4980]: E0107 03:53:05.223385 4980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 07 03:53:05 crc kubenswrapper[4980]: E0107 03:53:05.225028 4980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 07 03:53:05 crc kubenswrapper[4980]: E0107 03:53:05.225076 4980 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="55d986dc-458e-4bc8-9a09-d2f90e3d888c" containerName="nova-scheduler-scheduler" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.881950 4980 generic.go:334] "Generic (PLEG): container finished" podID="55d986dc-458e-4bc8-9a09-d2f90e3d888c" containerID="070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc" exitCode=0 Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.882191 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55d986dc-458e-4bc8-9a09-d2f90e3d888c","Type":"ContainerDied","Data":"070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc"} Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.883638 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.884532 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerID="5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc" exitCode=0 Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.884585 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea7912ee-f40e-44eb-be4b-f6c074a6db2c","Type":"ContainerDied","Data":"5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc"} Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.884603 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea7912ee-f40e-44eb-be4b-f6c074a6db2c","Type":"ContainerDied","Data":"7f40a5e21dcc7e6f9d1e13c81bbeaf906f79f21e5ea6b0af68c9680458f3a5f7"} Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.884619 4980 scope.go:117] "RemoveContainer" containerID="5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.938674 4980 scope.go:117] "RemoveContainer" containerID="2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.964112 4980 scope.go:117] "RemoveContainer" containerID="5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc" Jan 07 03:53:06 crc kubenswrapper[4980]: E0107 03:53:06.964620 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc\": container with ID starting with 5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc not found: ID does not exist" containerID="5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.964665 4980 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc"} err="failed to get container status \"5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc\": rpc error: code = NotFound desc = could not find container \"5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc\": container with ID starting with 5de656d2ae15b149edee735dac463d4f56987b8d2c029c240d41a07598bf15fc not found: ID does not exist" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.964741 4980 scope.go:117] "RemoveContainer" containerID="2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f" Jan 07 03:53:06 crc kubenswrapper[4980]: E0107 03:53:06.965092 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f\": container with ID starting with 2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f not found: ID does not exist" containerID="2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.965125 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f"} err="failed to get container status \"2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f\": rpc error: code = NotFound desc = could not find container \"2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f\": container with ID starting with 2decb416018dbe366761909df9a72f4d07c3057a46a2e59af4213b159024525f not found: ID does not exist" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.969090 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-logs\") pod 
\"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.969142 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-config-data\") pod \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.969184 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-combined-ca-bundle\") pod \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.969208 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45bdt\" (UniqueName: \"kubernetes.io/projected/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-kube-api-access-45bdt\") pod \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\" (UID: \"ea7912ee-f40e-44eb-be4b-f6c074a6db2c\") " Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.970369 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-logs" (OuterVolumeSpecName: "logs") pod "ea7912ee-f40e-44eb-be4b-f6c074a6db2c" (UID: "ea7912ee-f40e-44eb-be4b-f6c074a6db2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:06 crc kubenswrapper[4980]: I0107 03:53:06.974473 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-kube-api-access-45bdt" (OuterVolumeSpecName: "kube-api-access-45bdt") pod "ea7912ee-f40e-44eb-be4b-f6c074a6db2c" (UID: "ea7912ee-f40e-44eb-be4b-f6c074a6db2c"). InnerVolumeSpecName "kube-api-access-45bdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.006346 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-config-data" (OuterVolumeSpecName: "config-data") pod "ea7912ee-f40e-44eb-be4b-f6c074a6db2c" (UID: "ea7912ee-f40e-44eb-be4b-f6c074a6db2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.021755 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea7912ee-f40e-44eb-be4b-f6c074a6db2c" (UID: "ea7912ee-f40e-44eb-be4b-f6c074a6db2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.026882 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.074981 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.075022 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.075035 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.075047 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45bdt\" (UniqueName: \"kubernetes.io/projected/ea7912ee-f40e-44eb-be4b-f6c074a6db2c-kube-api-access-45bdt\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.176981 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-config-data\") pod \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.177077 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9mhh\" (UniqueName: \"kubernetes.io/projected/55d986dc-458e-4bc8-9a09-d2f90e3d888c-kube-api-access-j9mhh\") pod \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.177245 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-combined-ca-bundle\") pod \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\" (UID: \"55d986dc-458e-4bc8-9a09-d2f90e3d888c\") " Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.192024 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d986dc-458e-4bc8-9a09-d2f90e3d888c-kube-api-access-j9mhh" (OuterVolumeSpecName: "kube-api-access-j9mhh") pod "55d986dc-458e-4bc8-9a09-d2f90e3d888c" (UID: "55d986dc-458e-4bc8-9a09-d2f90e3d888c"). InnerVolumeSpecName "kube-api-access-j9mhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.220677 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-config-data" (OuterVolumeSpecName: "config-data") pod "55d986dc-458e-4bc8-9a09-d2f90e3d888c" (UID: "55d986dc-458e-4bc8-9a09-d2f90e3d888c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.221822 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.223244 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.227184 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55d986dc-458e-4bc8-9a09-d2f90e3d888c" (UID: "55d986dc-458e-4bc8-9a09-d2f90e3d888c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.279830 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9mhh\" (UniqueName: \"kubernetes.io/projected/55d986dc-458e-4bc8-9a09-d2f90e3d888c-kube-api-access-j9mhh\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.279866 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.279877 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d986dc-458e-4bc8-9a09-d2f90e3d888c-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.897397 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.899463 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55d986dc-458e-4bc8-9a09-d2f90e3d888c","Type":"ContainerDied","Data":"00c474b6c8838cb14bd99be2f4fd0e761fd76c5cb7e71d9cf17c0d63219733ed"} Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.899500 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.899538 4980 scope.go:117] "RemoveContainer" containerID="070e6438c35d8f5fbf18c37eced152a58b06eeb4b4bc3f33cd94cc1e9f6a06fc" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.933075 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.945754 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.960825 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.988811 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.999284 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:07 crc kubenswrapper[4980]: E0107 03:53:07.999686 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d986dc-458e-4bc8-9a09-d2f90e3d888c" containerName="nova-scheduler-scheduler" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.999704 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d986dc-458e-4bc8-9a09-d2f90e3d888c" containerName="nova-scheduler-scheduler" Jan 07 03:53:07 crc kubenswrapper[4980]: E0107 03:53:07.999740 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-log" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.999747 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-log" Jan 07 03:53:07 crc kubenswrapper[4980]: E0107 03:53:07.999757 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-api" Jan 07 03:53:07 
crc kubenswrapper[4980]: I0107 03:53:07.999762 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-api" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.999919 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d986dc-458e-4bc8-9a09-d2f90e3d888c" containerName="nova-scheduler-scheduler" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.999941 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-api" Jan 07 03:53:07 crc kubenswrapper[4980]: I0107 03:53:07.999951 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" containerName="nova-api-log" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.000940 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.003917 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.007798 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.030725 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.031933 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.033907 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.039143 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.195763 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-config-data\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.195911 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-config-data\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.196020 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8f5f859-60d9-4e3f-a515-a123893dd5d0-logs\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.196215 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9pgn\" (UniqueName: \"kubernetes.io/projected/0650f850-3e0d-4d89-a785-4d34664267ef-kube-api-access-p9pgn\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.196367 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.196447 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvd7w\" (UniqueName: \"kubernetes.io/projected/e8f5f859-60d9-4e3f-a515-a123893dd5d0-kube-api-access-wvd7w\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.196622 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.298300 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.298852 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-config-data\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.299105 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-config-data\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.299200 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8f5f859-60d9-4e3f-a515-a123893dd5d0-logs\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.299276 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9pgn\" (UniqueName: \"kubernetes.io/projected/0650f850-3e0d-4d89-a785-4d34664267ef-kube-api-access-p9pgn\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.299340 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.299389 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvd7w\" (UniqueName: \"kubernetes.io/projected/e8f5f859-60d9-4e3f-a515-a123893dd5d0-kube-api-access-wvd7w\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.300068 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8f5f859-60d9-4e3f-a515-a123893dd5d0-logs\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: 
I0107 03:53:08.306387 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.306409 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-config-data\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.306395 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-config-data\") pod \"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.306473 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.330220 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvd7w\" (UniqueName: \"kubernetes.io/projected/e8f5f859-60d9-4e3f-a515-a123893dd5d0-kube-api-access-wvd7w\") pod \"nova-api-0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.331627 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9pgn\" (UniqueName: \"kubernetes.io/projected/0650f850-3e0d-4d89-a785-4d34664267ef-kube-api-access-p9pgn\") pod 
\"nova-scheduler-0\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.352230 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.580847 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.582214 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" containerName="kube-state-metrics" containerID="cri-o://51f38411bc53cb1db055bab6f3da7535a99d2f5ed034348c25f0d72319b43023" gracePeriod=30 Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.622504 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.837483 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.919251 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0650f850-3e0d-4d89-a785-4d34664267ef","Type":"ContainerStarted","Data":"c12c186ee1898f70d0a0fb7be99e133cc8e2c872a0d539202b1cec87c8f133ab"} Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.922528 4980 generic.go:334] "Generic (PLEG): container finished" podID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" containerID="51f38411bc53cb1db055bab6f3da7535a99d2f5ed034348c25f0d72319b43023" exitCode=2 Jan 07 03:53:08 crc kubenswrapper[4980]: I0107 03:53:08.922621 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3fa7e62-6ab5-4edb-9311-9b49a85c766b","Type":"ContainerDied","Data":"51f38411bc53cb1db055bab6f3da7535a99d2f5ed034348c25f0d72319b43023"} Jan 07 03:53:09 crc 
kubenswrapper[4980]: I0107 03:53:09.117535 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.186005 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.318427 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggfh9\" (UniqueName: \"kubernetes.io/projected/f3fa7e62-6ab5-4edb-9311-9b49a85c766b-kube-api-access-ggfh9\") pod \"f3fa7e62-6ab5-4edb-9311-9b49a85c766b\" (UID: \"f3fa7e62-6ab5-4edb-9311-9b49a85c766b\") " Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.323512 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fa7e62-6ab5-4edb-9311-9b49a85c766b-kube-api-access-ggfh9" (OuterVolumeSpecName: "kube-api-access-ggfh9") pod "f3fa7e62-6ab5-4edb-9311-9b49a85c766b" (UID: "f3fa7e62-6ab5-4edb-9311-9b49a85c766b"). InnerVolumeSpecName "kube-api-access-ggfh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.421663 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggfh9\" (UniqueName: \"kubernetes.io/projected/f3fa7e62-6ab5-4edb-9311-9b49a85c766b-kube-api-access-ggfh9\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.762614 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d986dc-458e-4bc8-9a09-d2f90e3d888c" path="/var/lib/kubelet/pods/55d986dc-458e-4bc8-9a09-d2f90e3d888c/volumes" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.763175 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7912ee-f40e-44eb-be4b-f6c074a6db2c" path="/var/lib/kubelet/pods/ea7912ee-f40e-44eb-be4b-f6c074a6db2c/volumes" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.936264 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8f5f859-60d9-4e3f-a515-a123893dd5d0","Type":"ContainerStarted","Data":"58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573"} Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.936311 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8f5f859-60d9-4e3f-a515-a123893dd5d0","Type":"ContainerStarted","Data":"f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1"} Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.936321 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8f5f859-60d9-4e3f-a515-a123893dd5d0","Type":"ContainerStarted","Data":"7f476a940d8323246367b06c9889788842fbf33dfb4c6d9be0dacb71f97ba136"} Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.940673 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"f3fa7e62-6ab5-4edb-9311-9b49a85c766b","Type":"ContainerDied","Data":"7e89d93e568ddeceee7f8c2f95a9f69f7b003e235b53dc1c7926c1ca33fc0cc7"} Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.940731 4980 scope.go:117] "RemoveContainer" containerID="51f38411bc53cb1db055bab6f3da7535a99d2f5ed034348c25f0d72319b43023" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.940860 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.943971 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0650f850-3e0d-4d89-a785-4d34664267ef","Type":"ContainerStarted","Data":"5bc7e0c08cfc6f0e62c502576b252aa91232b6a8bc837cfbc58619f734f9926a"} Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.957209 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.957177294 podStartE2EDuration="2.957177294s" podCreationTimestamp="2026-01-07 03:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:09.95577354 +0000 UTC m=+1236.521468285" watchObservedRunningTime="2026-01-07 03:53:09.957177294 +0000 UTC m=+1236.522872029" Jan 07 03:53:09 crc kubenswrapper[4980]: I0107 03:53:09.988595 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.988574103 podStartE2EDuration="2.988574103s" podCreationTimestamp="2026-01-07 03:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:09.969420606 +0000 UTC m=+1236.535115351" watchObservedRunningTime="2026-01-07 03:53:09.988574103 +0000 UTC m=+1236.554268838" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.003542 4980 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.013840 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.027874 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:53:10 crc kubenswrapper[4980]: E0107 03:53:10.028680 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" containerName="kube-state-metrics" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.028721 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" containerName="kube-state-metrics" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.029057 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" containerName="kube-state-metrics" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.030161 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.032518 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.033849 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.036029 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.132978 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.133666 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.133701 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmr64\" (UniqueName: \"kubernetes.io/projected/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-api-access-lmr64\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.134489 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.236424 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.236510 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.236593 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmr64\" (UniqueName: \"kubernetes.io/projected/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-api-access-lmr64\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.236688 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.244614 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.245545 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.247747 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.275908 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmr64\" (UniqueName: \"kubernetes.io/projected/96c1a5f6-5439-4aa4-a1c0-27408fbbe977-kube-api-access-lmr64\") pod \"kube-state-metrics-0\" (UID: \"96c1a5f6-5439-4aa4-a1c0-27408fbbe977\") " pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.346941 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.785591 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.786423 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-central-agent" containerID="cri-o://3ed8266c2272f3bb7b2810804caf80d22867fbe9efa4d9d0beaedc1151883b5d" gracePeriod=30 Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.786484 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="proxy-httpd" containerID="cri-o://e4629353bf5f5fa1562af2db98c989b1e848336d906925ffcd0e6a1d5b5ec7ae" gracePeriod=30 Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.786500 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="sg-core" containerID="cri-o://46bf3ccf201c786e81ef65b74b15f50eabd1becb918424bd98329974a087f9ce" gracePeriod=30 Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.786536 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-notification-agent" containerID="cri-o://d59c2e0f541ef6199bf8b88f34fc752de1c5eb9c041461be70ff1be5cb276666" gracePeriod=30 Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.848193 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 07 03:53:10 crc kubenswrapper[4980]: W0107 03:53:10.855331 4980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c1a5f6_5439_4aa4_a1c0_27408fbbe977.slice/crio-6b68874753e4d3f367ea867f7d9b90ac956d9dbd42d5328322ad5a890393e847 WatchSource:0}: Error finding container 6b68874753e4d3f367ea867f7d9b90ac956d9dbd42d5328322ad5a890393e847: Status 404 returned error can't find the container with id 6b68874753e4d3f367ea867f7d9b90ac956d9dbd42d5328322ad5a890393e847 Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.957931 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"96c1a5f6-5439-4aa4-a1c0-27408fbbe977","Type":"ContainerStarted","Data":"6b68874753e4d3f367ea867f7d9b90ac956d9dbd42d5328322ad5a890393e847"} Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.964462 4980 generic.go:334] "Generic (PLEG): container finished" podID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerID="e4629353bf5f5fa1562af2db98c989b1e848336d906925ffcd0e6a1d5b5ec7ae" exitCode=0 Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.964496 4980 generic.go:334] "Generic (PLEG): container finished" podID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerID="46bf3ccf201c786e81ef65b74b15f50eabd1becb918424bd98329974a087f9ce" exitCode=2 Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.964570 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerDied","Data":"e4629353bf5f5fa1562af2db98c989b1e848336d906925ffcd0e6a1d5b5ec7ae"} Jan 07 03:53:10 crc kubenswrapper[4980]: I0107 03:53:10.964622 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerDied","Data":"46bf3ccf201c786e81ef65b74b15f50eabd1becb918424bd98329974a087f9ce"} Jan 07 03:53:11 crc kubenswrapper[4980]: I0107 03:53:11.755533 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fa7e62-6ab5-4edb-9311-9b49a85c766b" 
path="/var/lib/kubelet/pods/f3fa7e62-6ab5-4edb-9311-9b49a85c766b/volumes" Jan 07 03:53:11 crc kubenswrapper[4980]: I0107 03:53:11.983166 4980 generic.go:334] "Generic (PLEG): container finished" podID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerID="3ed8266c2272f3bb7b2810804caf80d22867fbe9efa4d9d0beaedc1151883b5d" exitCode=0 Jan 07 03:53:11 crc kubenswrapper[4980]: I0107 03:53:11.983240 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerDied","Data":"3ed8266c2272f3bb7b2810804caf80d22867fbe9efa4d9d0beaedc1151883b5d"} Jan 07 03:53:11 crc kubenswrapper[4980]: I0107 03:53:11.985433 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"96c1a5f6-5439-4aa4-a1c0-27408fbbe977","Type":"ContainerStarted","Data":"d961e1b2bbfb398eb32ac8882fcf4c3a48dc0bdc0a38bb001315a83b688b34ad"} Jan 07 03:53:11 crc kubenswrapper[4980]: I0107 03:53:11.986143 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.002312 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.620887477 podStartE2EDuration="3.002291828s" podCreationTimestamp="2026-01-07 03:53:09 +0000 UTC" firstStartedPulling="2026-01-07 03:53:10.858080628 +0000 UTC m=+1237.423775373" lastFinishedPulling="2026-01-07 03:53:11.239484949 +0000 UTC m=+1237.805179724" observedRunningTime="2026-01-07 03:53:12.001486953 +0000 UTC m=+1238.567181718" watchObservedRunningTime="2026-01-07 03:53:12.002291828 +0000 UTC m=+1238.567986603" Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.222446 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.222847 4980 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.293094 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.994967 4980 generic.go:334] "Generic (PLEG): container finished" podID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerID="d59c2e0f541ef6199bf8b88f34fc752de1c5eb9c041461be70ff1be5cb276666" exitCode=0 Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.995032 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerDied","Data":"d59c2e0f541ef6199bf8b88f34fc752de1c5eb9c041461be70ff1be5cb276666"} Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.995440 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66329f69-013c-4533-9ca3-1d4a9fe9073c","Type":"ContainerDied","Data":"2e2e735e80fd44eafda1e25371aacc8c9712fd4f8fdfac98c0e763c349bea90f"} Jan 07 03:53:12 crc kubenswrapper[4980]: I0107 03:53:12.995452 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e2e735e80fd44eafda1e25371aacc8c9712fd4f8fdfac98c0e763c349bea90f" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.038454 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.094103 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-config-data\") pod \"66329f69-013c-4533-9ca3-1d4a9fe9073c\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095039 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-run-httpd\") pod \"66329f69-013c-4533-9ca3-1d4a9fe9073c\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095185 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8546z\" (UniqueName: \"kubernetes.io/projected/66329f69-013c-4533-9ca3-1d4a9fe9073c-kube-api-access-8546z\") pod \"66329f69-013c-4533-9ca3-1d4a9fe9073c\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095230 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-log-httpd\") pod \"66329f69-013c-4533-9ca3-1d4a9fe9073c\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095267 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-scripts\") pod \"66329f69-013c-4533-9ca3-1d4a9fe9073c\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095324 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-sg-core-conf-yaml\") pod \"66329f69-013c-4533-9ca3-1d4a9fe9073c\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095352 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-combined-ca-bundle\") pod \"66329f69-013c-4533-9ca3-1d4a9fe9073c\" (UID: \"66329f69-013c-4533-9ca3-1d4a9fe9073c\") " Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095523 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66329f69-013c-4533-9ca3-1d4a9fe9073c" (UID: "66329f69-013c-4533-9ca3-1d4a9fe9073c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.095862 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66329f69-013c-4533-9ca3-1d4a9fe9073c" (UID: "66329f69-013c-4533-9ca3-1d4a9fe9073c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.096304 4980 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.096322 4980 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66329f69-013c-4533-9ca3-1d4a9fe9073c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.102283 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66329f69-013c-4533-9ca3-1d4a9fe9073c-kube-api-access-8546z" (OuterVolumeSpecName: "kube-api-access-8546z") pod "66329f69-013c-4533-9ca3-1d4a9fe9073c" (UID: "66329f69-013c-4533-9ca3-1d4a9fe9073c"). InnerVolumeSpecName "kube-api-access-8546z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.110694 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-scripts" (OuterVolumeSpecName: "scripts") pod "66329f69-013c-4533-9ca3-1d4a9fe9073c" (UID: "66329f69-013c-4533-9ca3-1d4a9fe9073c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.170118 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66329f69-013c-4533-9ca3-1d4a9fe9073c" (UID: "66329f69-013c-4533-9ca3-1d4a9fe9073c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.197795 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.197829 4980 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.197839 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8546z\" (UniqueName: \"kubernetes.io/projected/66329f69-013c-4533-9ca3-1d4a9fe9073c-kube-api-access-8546z\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.206778 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66329f69-013c-4533-9ca3-1d4a9fe9073c" (UID: "66329f69-013c-4533-9ca3-1d4a9fe9073c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.236697 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.237022 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.269461 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-config-data" (OuterVolumeSpecName: "config-data") pod "66329f69-013c-4533-9ca3-1d4a9fe9073c" (UID: "66329f69-013c-4533-9ca3-1d4a9fe9073c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.299777 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.299817 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66329f69-013c-4533-9ca3-1d4a9fe9073c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:13 crc kubenswrapper[4980]: I0107 03:53:13.353362 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.002218 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.029924 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.041085 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.061734 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:14 crc kubenswrapper[4980]: E0107 03:53:14.062218 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="sg-core" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.062241 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="sg-core" Jan 07 03:53:14 crc kubenswrapper[4980]: E0107 03:53:14.062284 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-notification-agent" Jan 07 03:53:14 crc 
kubenswrapper[4980]: I0107 03:53:14.062295 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-notification-agent" Jan 07 03:53:14 crc kubenswrapper[4980]: E0107 03:53:14.062307 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="proxy-httpd" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.062315 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="proxy-httpd" Jan 07 03:53:14 crc kubenswrapper[4980]: E0107 03:53:14.062333 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-central-agent" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.062341 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-central-agent" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.062583 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="sg-core" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.062604 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="proxy-httpd" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.062625 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-central-agent" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.062639 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" containerName="ceilometer-notification-agent" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.064748 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.067009 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.068074 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.068260 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.091619 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117242 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117348 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117372 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-config-data\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117536 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-scripts\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117608 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117652 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nr8c\" (UniqueName: \"kubernetes.io/projected/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-kube-api-access-6nr8c\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117825 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.117851 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.220267 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.220358 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.220416 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-config-data\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.220445 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-scripts\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.220800 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.221264 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.221303 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nr8c\" (UniqueName: 
\"kubernetes.io/projected/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-kube-api-access-6nr8c\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.221370 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.221386 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.221616 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.226289 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.226858 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-scripts\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.228413 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-config-data\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.232210 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.232742 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.238880 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nr8c\" (UniqueName: \"kubernetes.io/projected/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-kube-api-access-6nr8c\") pod \"ceilometer-0\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.393673 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:14 crc kubenswrapper[4980]: I0107 03:53:14.837839 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:14 crc kubenswrapper[4980]: W0107 03:53:14.842121 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc51631_9db9_4e1a_9b50_75a4c4ba8762.slice/crio-d98fb630b1c9c13885386568f10de343957564f10316dfae7fbc7888555c6cb6 WatchSource:0}: Error finding container d98fb630b1c9c13885386568f10de343957564f10316dfae7fbc7888555c6cb6: Status 404 returned error can't find the container with id d98fb630b1c9c13885386568f10de343957564f10316dfae7fbc7888555c6cb6 Jan 07 03:53:15 crc kubenswrapper[4980]: I0107 03:53:15.016826 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerStarted","Data":"d98fb630b1c9c13885386568f10de343957564f10316dfae7fbc7888555c6cb6"} Jan 07 03:53:15 crc kubenswrapper[4980]: I0107 03:53:15.746319 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66329f69-013c-4533-9ca3-1d4a9fe9073c" path="/var/lib/kubelet/pods/66329f69-013c-4533-9ca3-1d4a9fe9073c/volumes" Jan 07 03:53:16 crc kubenswrapper[4980]: I0107 03:53:16.030146 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerStarted","Data":"fb123deda23029d88fbbeb28a04b311cd8455171309d2471ea2946620d284f11"} Jan 07 03:53:17 crc kubenswrapper[4980]: I0107 03:53:17.045659 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerStarted","Data":"91c9edd51108af47a634f44c3683f0027a2e7bc5497420dd8345a38b179ae234"} Jan 07 03:53:18 crc kubenswrapper[4980]: I0107 03:53:18.055644 4980 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerStarted","Data":"f8811483aa6c7c7881100f6a7d6cf8442d77763f3dea2cf45444a1bb77010d92"} Jan 07 03:53:18 crc kubenswrapper[4980]: I0107 03:53:18.354305 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 07 03:53:18 crc kubenswrapper[4980]: I0107 03:53:18.401613 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 07 03:53:18 crc kubenswrapper[4980]: I0107 03:53:18.622974 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 07 03:53:18 crc kubenswrapper[4980]: I0107 03:53:18.624287 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 07 03:53:19 crc kubenswrapper[4980]: I0107 03:53:19.070590 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerStarted","Data":"018e2028f3c63a46801c713d521d985915f558b1065a2ff6c20917854148aa45"} Jan 07 03:53:19 crc kubenswrapper[4980]: I0107 03:53:19.072016 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 07 03:53:19 crc kubenswrapper[4980]: I0107 03:53:19.108378 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 07 03:53:19 crc kubenswrapper[4980]: I0107 03:53:19.108357 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.261381915 podStartE2EDuration="5.108332807s" podCreationTimestamp="2026-01-07 03:53:14 +0000 UTC" firstStartedPulling="2026-01-07 03:53:14.845441769 +0000 UTC m=+1241.411136504" lastFinishedPulling="2026-01-07 03:53:18.692392651 +0000 UTC m=+1245.258087396" observedRunningTime="2026-01-07 03:53:19.103627431 +0000 UTC 
m=+1245.669322226" watchObservedRunningTime="2026-01-07 03:53:19.108332807 +0000 UTC m=+1245.674027572" Jan 07 03:53:19 crc kubenswrapper[4980]: I0107 03:53:19.705046 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:19 crc kubenswrapper[4980]: I0107 03:53:19.705463 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:20 crc kubenswrapper[4980]: I0107 03:53:20.364858 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 07 03:53:22 crc kubenswrapper[4980]: I0107 03:53:22.238213 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 07 03:53:22 crc kubenswrapper[4980]: I0107 03:53:22.239155 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 07 03:53:22 crc kubenswrapper[4980]: I0107 03:53:22.249977 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 07 03:53:23 crc kubenswrapper[4980]: I0107 03:53:23.127349 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.154997 4980 generic.go:334] "Generic (PLEG): container finished" podID="a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" containerID="85c2491584ac135102dac8b6f4e680e1681297864b7ae61232c23a217d87705b" exitCode=137 Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.155773 
4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6","Type":"ContainerDied","Data":"85c2491584ac135102dac8b6f4e680e1681297864b7ae61232c23a217d87705b"} Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.672943 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.806709 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k88xk\" (UniqueName: \"kubernetes.io/projected/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-kube-api-access-k88xk\") pod \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.806862 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-combined-ca-bundle\") pod \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.806942 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-config-data\") pod \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\" (UID: \"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6\") " Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.816165 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-kube-api-access-k88xk" (OuterVolumeSpecName: "kube-api-access-k88xk") pod "a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" (UID: "a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6"). InnerVolumeSpecName "kube-api-access-k88xk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.841489 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" (UID: "a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.842039 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-config-data" (OuterVolumeSpecName: "config-data") pod "a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" (UID: "a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.910418 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.910446 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:26 crc kubenswrapper[4980]: I0107 03:53:26.910456 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k88xk\" (UniqueName: \"kubernetes.io/projected/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6-kube-api-access-k88xk\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.173418 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6","Type":"ContainerDied","Data":"816274d8bdc013d562ede0c8c92d8321f85be6dcc8f914bb18347f32bcb5cedd"} Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.173481 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.173992 4980 scope.go:117] "RemoveContainer" containerID="85c2491584ac135102dac8b6f4e680e1681297864b7ae61232c23a217d87705b" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.243251 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.278258 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.291436 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:53:27 crc kubenswrapper[4980]: E0107 03:53:27.291927 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" containerName="nova-cell1-novncproxy-novncproxy" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.291950 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" containerName="nova-cell1-novncproxy-novncproxy" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.292205 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" containerName="nova-cell1-novncproxy-novncproxy" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.293024 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.294932 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.295155 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.300149 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.302326 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.420352 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.420443 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.420612 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc 
kubenswrapper[4980]: I0107 03:53:27.420796 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.420870 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zshc7\" (UniqueName: \"kubernetes.io/projected/0fca998b-28f9-4611-99f7-2cb9f2cb8042-kube-api-access-zshc7\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.522648 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zshc7\" (UniqueName: \"kubernetes.io/projected/0fca998b-28f9-4611-99f7-2cb9f2cb8042-kube-api-access-zshc7\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.522747 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.522782 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc 
kubenswrapper[4980]: I0107 03:53:27.522801 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.522925 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.529821 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.530466 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.531368 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.535650 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fca998b-28f9-4611-99f7-2cb9f2cb8042-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.557186 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zshc7\" (UniqueName: \"kubernetes.io/projected/0fca998b-28f9-4611-99f7-2cb9f2cb8042-kube-api-access-zshc7\") pod \"nova-cell1-novncproxy-0\" (UID: \"0fca998b-28f9-4611-99f7-2cb9f2cb8042\") " pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.616755 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.790670 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6" path="/var/lib/kubelet/pods/a2ba0b9e-8e97-4a71-910d-d9ae783d6ae6/volumes" Jan 07 03:53:27 crc kubenswrapper[4980]: I0107 03:53:27.976971 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 07 03:53:27 crc kubenswrapper[4980]: W0107 03:53:27.981087 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fca998b_28f9_4611_99f7_2cb9f2cb8042.slice/crio-178ac99f03bd6bc18a4452c4b50970b7c12fa3b18557614482865f8ad7e33371 WatchSource:0}: Error finding container 178ac99f03bd6bc18a4452c4b50970b7c12fa3b18557614482865f8ad7e33371: Status 404 returned error can't find the container with id 178ac99f03bd6bc18a4452c4b50970b7c12fa3b18557614482865f8ad7e33371 Jan 07 03:53:28 crc kubenswrapper[4980]: I0107 03:53:28.183523 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"0fca998b-28f9-4611-99f7-2cb9f2cb8042","Type":"ContainerStarted","Data":"178ac99f03bd6bc18a4452c4b50970b7c12fa3b18557614482865f8ad7e33371"} Jan 07 03:53:28 crc kubenswrapper[4980]: I0107 03:53:28.628049 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 07 03:53:28 crc kubenswrapper[4980]: I0107 03:53:28.628435 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 07 03:53:28 crc kubenswrapper[4980]: I0107 03:53:28.631275 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 07 03:53:28 crc kubenswrapper[4980]: I0107 03:53:28.634802 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.204515 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0fca998b-28f9-4611-99f7-2cb9f2cb8042","Type":"ContainerStarted","Data":"503c898d4e15aa725b5d031991ff0c6ee0c97bd9b49e9df80ea9b91f87242e18"} Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.205308 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.209093 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.235069 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.235039031 podStartE2EDuration="2.235039031s" podCreationTimestamp="2026-01-07 03:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:29.232964876 +0000 UTC m=+1255.798659611" watchObservedRunningTime="2026-01-07 03:53:29.235039031 +0000 UTC 
m=+1255.800733796" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.462597 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jqrfc"] Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.464488 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.483045 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jqrfc"] Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.576185 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.576560 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.576643 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgv9h\" (UniqueName: \"kubernetes.io/projected/76e7cd2b-434d-48f9-8877-1395706691f4-kube-api-access-jgv9h\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.576699 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-config\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.576748 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.576950 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.678290 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-config\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.678358 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.678404 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.678444 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.678461 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.678509 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgv9h\" (UniqueName: \"kubernetes.io/projected/76e7cd2b-434d-48f9-8877-1395706691f4-kube-api-access-jgv9h\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.679706 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.679776 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.680360 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.693423 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.695120 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-config\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.701603 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgv9h\" (UniqueName: \"kubernetes.io/projected/76e7cd2b-434d-48f9-8877-1395706691f4-kube-api-access-jgv9h\") pod \"dnsmasq-dns-89c5cd4d5-jqrfc\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:29 crc kubenswrapper[4980]: I0107 03:53:29.794356 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:30 crc kubenswrapper[4980]: W0107 03:53:30.377299 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76e7cd2b_434d_48f9_8877_1395706691f4.slice/crio-c679ef651640597cbdb28c69398e669adfb1c6ad217a04c58d6a6d85b78ef2e1 WatchSource:0}: Error finding container c679ef651640597cbdb28c69398e669adfb1c6ad217a04c58d6a6d85b78ef2e1: Status 404 returned error can't find the container with id c679ef651640597cbdb28c69398e669adfb1c6ad217a04c58d6a6d85b78ef2e1 Jan 07 03:53:30 crc kubenswrapper[4980]: I0107 03:53:30.382771 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jqrfc"] Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.221956 4980 generic.go:334] "Generic (PLEG): container finished" podID="76e7cd2b-434d-48f9-8877-1395706691f4" containerID="3fd56efc40b629351ca9e5f91e0bbcdc917b6a0b994a94e4bb95f6376c342453" exitCode=0 Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.222054 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" event={"ID":"76e7cd2b-434d-48f9-8877-1395706691f4","Type":"ContainerDied","Data":"3fd56efc40b629351ca9e5f91e0bbcdc917b6a0b994a94e4bb95f6376c342453"} Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.222670 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" event={"ID":"76e7cd2b-434d-48f9-8877-1395706691f4","Type":"ContainerStarted","Data":"c679ef651640597cbdb28c69398e669adfb1c6ad217a04c58d6a6d85b78ef2e1"} Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.275802 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.276426 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-central-agent" containerID="cri-o://fb123deda23029d88fbbeb28a04b311cd8455171309d2471ea2946620d284f11" gracePeriod=30 Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.276773 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="proxy-httpd" containerID="cri-o://018e2028f3c63a46801c713d521d985915f558b1065a2ff6c20917854148aa45" gracePeriod=30 Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.276806 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-notification-agent" containerID="cri-o://91c9edd51108af47a634f44c3683f0027a2e7bc5497420dd8345a38b179ae234" gracePeriod=30 Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.276824 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="sg-core" containerID="cri-o://f8811483aa6c7c7881100f6a7d6cf8442d77763f3dea2cf45444a1bb77010d92" gracePeriod=30 Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.304023 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.197:3000/\": EOF" Jan 07 03:53:31 crc kubenswrapper[4980]: I0107 03:53:31.902108 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.240225 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" event={"ID":"76e7cd2b-434d-48f9-8877-1395706691f4","Type":"ContainerStarted","Data":"1194a3e34a6352a689f25e7333b32840ceca1ba433c13e271ed95f668a4b6392"} Jan 07 
03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.240351 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.242976 4980 generic.go:334] "Generic (PLEG): container finished" podID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerID="018e2028f3c63a46801c713d521d985915f558b1065a2ff6c20917854148aa45" exitCode=0 Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243011 4980 generic.go:334] "Generic (PLEG): container finished" podID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerID="f8811483aa6c7c7881100f6a7d6cf8442d77763f3dea2cf45444a1bb77010d92" exitCode=2 Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243033 4980 generic.go:334] "Generic (PLEG): container finished" podID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerID="91c9edd51108af47a634f44c3683f0027a2e7bc5497420dd8345a38b179ae234" exitCode=0 Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243046 4980 generic.go:334] "Generic (PLEG): container finished" podID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerID="fb123deda23029d88fbbeb28a04b311cd8455171309d2471ea2946620d284f11" exitCode=0 Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243052 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerDied","Data":"018e2028f3c63a46801c713d521d985915f558b1065a2ff6c20917854148aa45"} Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243096 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerDied","Data":"f8811483aa6c7c7881100f6a7d6cf8442d77763f3dea2cf45444a1bb77010d92"} Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243111 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerDied","Data":"91c9edd51108af47a634f44c3683f0027a2e7bc5497420dd8345a38b179ae234"} Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243121 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerDied","Data":"fb123deda23029d88fbbeb28a04b311cd8455171309d2471ea2946620d284f11"} Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243232 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-log" containerID="cri-o://f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1" gracePeriod=30 Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.243267 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-api" containerID="cri-o://58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573" gracePeriod=30 Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.266797 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" podStartSLOduration=3.26678157 podStartE2EDuration="3.26678157s" podCreationTimestamp="2026-01-07 03:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:32.265903213 +0000 UTC m=+1258.831597948" watchObservedRunningTime="2026-01-07 03:53:32.26678157 +0000 UTC m=+1258.832476305" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.305006 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.442211 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-run-httpd\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.442261 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nr8c\" (UniqueName: \"kubernetes.io/projected/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-kube-api-access-6nr8c\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.442379 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-config-data\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.442397 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-ceilometer-tls-certs\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.442581 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.443208 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-log-httpd\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.443242 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-combined-ca-bundle\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.443305 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-scripts\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.443345 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-sg-core-conf-yaml\") pod \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\" (UID: \"bbc51631-9db9-4e1a-9b50-75a4c4ba8762\") " Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.443730 4980 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.444829 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.447939 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-kube-api-access-6nr8c" (OuterVolumeSpecName: "kube-api-access-6nr8c") pod "bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "kube-api-access-6nr8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.449876 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-scripts" (OuterVolumeSpecName: "scripts") pod "bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.476285 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.504320 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.534083 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.545260 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.545287 4980 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.545298 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nr8c\" (UniqueName: \"kubernetes.io/projected/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-kube-api-access-6nr8c\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.545307 4980 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.545316 4980 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.545324 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.553900 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-config-data" (OuterVolumeSpecName: "config-data") pod "bbc51631-9db9-4e1a-9b50-75a4c4ba8762" (UID: "bbc51631-9db9-4e1a-9b50-75a4c4ba8762"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.617818 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:32 crc kubenswrapper[4980]: I0107 03:53:32.647514 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc51631-9db9-4e1a-9b50-75a4c4ba8762-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.257741 4980 generic.go:334] "Generic (PLEG): container finished" podID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerID="f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1" exitCode=143 Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.257835 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8f5f859-60d9-4e3f-a515-a123893dd5d0","Type":"ContainerDied","Data":"f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1"} Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.261538 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc51631-9db9-4e1a-9b50-75a4c4ba8762","Type":"ContainerDied","Data":"d98fb630b1c9c13885386568f10de343957564f10316dfae7fbc7888555c6cb6"} Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.261608 4980 scope.go:117] "RemoveContainer" containerID="018e2028f3c63a46801c713d521d985915f558b1065a2ff6c20917854148aa45" 
Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.261673 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.290694 4980 scope.go:117] "RemoveContainer" containerID="f8811483aa6c7c7881100f6a7d6cf8442d77763f3dea2cf45444a1bb77010d92" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.301673 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.316640 4980 scope.go:117] "RemoveContainer" containerID="91c9edd51108af47a634f44c3683f0027a2e7bc5497420dd8345a38b179ae234" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.326135 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.337227 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.337650 4980 scope.go:117] "RemoveContainer" containerID="fb123deda23029d88fbbeb28a04b311cd8455171309d2471ea2946620d284f11" Jan 07 03:53:33 crc kubenswrapper[4980]: E0107 03:53:33.337889 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-notification-agent" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.337920 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-notification-agent" Jan 07 03:53:33 crc kubenswrapper[4980]: E0107 03:53:33.337958 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="proxy-httpd" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.337969 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="proxy-httpd" Jan 07 03:53:33 crc 
kubenswrapper[4980]: E0107 03:53:33.337990 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="sg-core" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.338003 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="sg-core" Jan 07 03:53:33 crc kubenswrapper[4980]: E0107 03:53:33.338034 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-central-agent" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.338043 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-central-agent" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.338317 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-central-agent" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.338346 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="sg-core" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.338359 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="proxy-httpd" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.338389 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" containerName="ceilometer-notification-agent" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.340547 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.343854 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.343889 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.344190 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.349208 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361594 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzkh2\" (UniqueName: \"kubernetes.io/projected/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-kube-api-access-hzkh2\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361631 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-scripts\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361689 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361711 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361773 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-log-httpd\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361790 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-config-data\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361806 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-run-httpd\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.361823 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.463707 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.463754 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.463938 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-log-httpd\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.463973 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-config-data\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.463996 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-run-httpd\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.464018 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.464114 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hzkh2\" (UniqueName: \"kubernetes.io/projected/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-kube-api-access-hzkh2\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.464136 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-scripts\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.464507 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-log-httpd\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.465168 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-run-httpd\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.468399 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.468916 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-config-data\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 
03:53:33.478793 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.479775 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.481119 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-scripts\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.483014 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzkh2\" (UniqueName: \"kubernetes.io/projected/7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8-kube-api-access-hzkh2\") pod \"ceilometer-0\" (UID: \"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8\") " pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.669040 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 07 03:53:33 crc kubenswrapper[4980]: I0107 03:53:33.764019 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc51631-9db9-4e1a-9b50-75a4c4ba8762" path="/var/lib/kubelet/pods/bbc51631-9db9-4e1a-9b50-75a4c4ba8762/volumes" Jan 07 03:53:34 crc kubenswrapper[4980]: I0107 03:53:34.141249 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 07 03:53:34 crc kubenswrapper[4980]: W0107 03:53:34.142101 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7aa05cd9_6f30_4fbe_a2b2_fb527752dcf8.slice/crio-3b6def9588ad9447cd3bcc37cc834069df5e225dec0f101c8d7276aca98aefa3 WatchSource:0}: Error finding container 3b6def9588ad9447cd3bcc37cc834069df5e225dec0f101c8d7276aca98aefa3: Status 404 returned error can't find the container with id 3b6def9588ad9447cd3bcc37cc834069df5e225dec0f101c8d7276aca98aefa3 Jan 07 03:53:34 crc kubenswrapper[4980]: I0107 03:53:34.144992 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 03:53:34 crc kubenswrapper[4980]: I0107 03:53:34.277845 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8","Type":"ContainerStarted","Data":"3b6def9588ad9447cd3bcc37cc834069df5e225dec0f101c8d7276aca98aefa3"} Jan 07 03:53:35 crc kubenswrapper[4980]: I0107 03:53:35.954996 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.127948 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvd7w\" (UniqueName: \"kubernetes.io/projected/e8f5f859-60d9-4e3f-a515-a123893dd5d0-kube-api-access-wvd7w\") pod \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.128349 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8f5f859-60d9-4e3f-a515-a123893dd5d0-logs\") pod \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.128439 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-config-data\") pod \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.128514 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-combined-ca-bundle\") pod \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\" (UID: \"e8f5f859-60d9-4e3f-a515-a123893dd5d0\") " Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.128828 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8f5f859-60d9-4e3f-a515-a123893dd5d0-logs" (OuterVolumeSpecName: "logs") pod "e8f5f859-60d9-4e3f-a515-a123893dd5d0" (UID: "e8f5f859-60d9-4e3f-a515-a123893dd5d0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.129136 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8f5f859-60d9-4e3f-a515-a123893dd5d0-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.136998 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f5f859-60d9-4e3f-a515-a123893dd5d0-kube-api-access-wvd7w" (OuterVolumeSpecName: "kube-api-access-wvd7w") pod "e8f5f859-60d9-4e3f-a515-a123893dd5d0" (UID: "e8f5f859-60d9-4e3f-a515-a123893dd5d0"). InnerVolumeSpecName "kube-api-access-wvd7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.171338 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8f5f859-60d9-4e3f-a515-a123893dd5d0" (UID: "e8f5f859-60d9-4e3f-a515-a123893dd5d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.172315 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-config-data" (OuterVolumeSpecName: "config-data") pod "e8f5f859-60d9-4e3f-a515-a123893dd5d0" (UID: "e8f5f859-60d9-4e3f-a515-a123893dd5d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.231039 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvd7w\" (UniqueName: \"kubernetes.io/projected/e8f5f859-60d9-4e3f-a515-a123893dd5d0-kube-api-access-wvd7w\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.231080 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.231093 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f5f859-60d9-4e3f-a515-a123893dd5d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.303975 4980 generic.go:334] "Generic (PLEG): container finished" podID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerID="58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573" exitCode=0 Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.304020 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.304042 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8f5f859-60d9-4e3f-a515-a123893dd5d0","Type":"ContainerDied","Data":"58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573"} Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.304070 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8f5f859-60d9-4e3f-a515-a123893dd5d0","Type":"ContainerDied","Data":"7f476a940d8323246367b06c9889788842fbf33dfb4c6d9be0dacb71f97ba136"} Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.304086 4980 scope.go:117] "RemoveContainer" containerID="58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.306829 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8","Type":"ContainerStarted","Data":"250c1f93e94f84c3f79b73eb39b16dc94a7088f74994161880a363dbfe023ef1"} Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.306876 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8","Type":"ContainerStarted","Data":"ec4755a272ab0497d8967f595472d6c5a886700b792f14bd15cfa3301df46abe"} Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.350061 4980 scope.go:117] "RemoveContainer" containerID="f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.350521 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.359681 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.378785 4980 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:36 crc kubenswrapper[4980]: E0107 03:53:36.379328 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-log" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.379350 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-log" Jan 07 03:53:36 crc kubenswrapper[4980]: E0107 03:53:36.379369 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-api" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.379380 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-api" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.379661 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-log" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.379692 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" containerName="nova-api-api" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.380948 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.388157 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.388412 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.388585 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.389182 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.401783 4980 scope.go:117] "RemoveContainer" containerID="58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573" Jan 07 03:53:36 crc kubenswrapper[4980]: E0107 03:53:36.402320 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573\": container with ID starting with 58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573 not found: ID does not exist" containerID="58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.402452 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573"} err="failed to get container status \"58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573\": rpc error: code = NotFound desc = could not find container \"58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573\": container with ID starting with 58d0f8520056882093c4dbf77f5740fe0262df7c8b31021ba90857a26afa0573 not found: ID does not exist" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.402536 4980 
scope.go:117] "RemoveContainer" containerID="f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1" Jan 07 03:53:36 crc kubenswrapper[4980]: E0107 03:53:36.404625 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1\": container with ID starting with f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1 not found: ID does not exist" containerID="f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.404698 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1"} err="failed to get container status \"f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1\": rpc error: code = NotFound desc = could not find container \"f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1\": container with ID starting with f85cceae98b73cc8598c2db6c1c6dc71028479a350a99633f63c847392e7e1a1 not found: ID does not exist" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.535065 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cad4f60-a0cf-48f8-aa76-230e58f2e012-logs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.535137 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.535211 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-public-tls-certs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.535231 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-config-data\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.535253 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.535277 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxp7\" (UniqueName: \"kubernetes.io/projected/0cad4f60-a0cf-48f8-aa76-230e58f2e012-kube-api-access-blxp7\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.542995 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.543078 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.637220 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cad4f60-a0cf-48f8-aa76-230e58f2e012-logs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.637297 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.637330 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-public-tls-certs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.637352 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-config-data\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.637373 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.637394 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxp7\" (UniqueName: \"kubernetes.io/projected/0cad4f60-a0cf-48f8-aa76-230e58f2e012-kube-api-access-blxp7\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.637766 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cad4f60-a0cf-48f8-aa76-230e58f2e012-logs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.641872 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.643671 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-config-data\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.646569 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.654380 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-public-tls-certs\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " 
pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.662015 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blxp7\" (UniqueName: \"kubernetes.io/projected/0cad4f60-a0cf-48f8-aa76-230e58f2e012-kube-api-access-blxp7\") pod \"nova-api-0\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " pod="openstack/nova-api-0" Jan 07 03:53:36 crc kubenswrapper[4980]: I0107 03:53:36.705027 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:37 crc kubenswrapper[4980]: I0107 03:53:37.175983 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:37 crc kubenswrapper[4980]: I0107 03:53:37.317491 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8","Type":"ContainerStarted","Data":"1ccfed059775acadf0e5d615d5f3b4c9169f0140dc00097b13fd44028daf7869"} Jan 07 03:53:37 crc kubenswrapper[4980]: I0107 03:53:37.318459 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cad4f60-a0cf-48f8-aa76-230e58f2e012","Type":"ContainerStarted","Data":"da446ecb56d0fb2ce374e15a42b9cb864172947381af1b35640fe91ff278c823"} Jan 07 03:53:37 crc kubenswrapper[4980]: I0107 03:53:37.617065 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:37 crc kubenswrapper[4980]: I0107 03:53:37.636123 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:37 crc kubenswrapper[4980]: I0107 03:53:37.746777 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f5f859-60d9-4e3f-a515-a123893dd5d0" path="/var/lib/kubelet/pods/e8f5f859-60d9-4e3f-a515-a123893dd5d0/volumes" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.330495 4980 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cad4f60-a0cf-48f8-aa76-230e58f2e012","Type":"ContainerStarted","Data":"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577"} Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.330997 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cad4f60-a0cf-48f8-aa76-230e58f2e012","Type":"ContainerStarted","Data":"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4"} Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.382480 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3824507759999998 podStartE2EDuration="2.382450776s" podCreationTimestamp="2026-01-07 03:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:38.363701552 +0000 UTC m=+1264.929396327" watchObservedRunningTime="2026-01-07 03:53:38.382450776 +0000 UTC m=+1264.948145551" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.394016 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.632978 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-z444m"] Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.634341 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.638854 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.638880 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.642904 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z444m"] Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.784141 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.784238 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-scripts\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.784286 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzsn\" (UniqueName: \"kubernetes.io/projected/966ffc7e-1827-4cea-b4f1-f820b4e41986-kube-api-access-wxzsn\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.784350 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-config-data\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.886080 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-scripts\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.886855 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxzsn\" (UniqueName: \"kubernetes.io/projected/966ffc7e-1827-4cea-b4f1-f820b4e41986-kube-api-access-wxzsn\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.886980 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-config-data\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.887121 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.891522 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-scripts\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.894132 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.894191 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-config-data\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.923969 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxzsn\" (UniqueName: \"kubernetes.io/projected/966ffc7e-1827-4cea-b4f1-f820b4e41986-kube-api-access-wxzsn\") pod \"nova-cell1-cell-mapping-z444m\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:38 crc kubenswrapper[4980]: I0107 03:53:38.954777 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:39 crc kubenswrapper[4980]: I0107 03:53:39.345139 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8","Type":"ContainerStarted","Data":"d6815030bd39eea29eb5b34d5356098acb79c661465c314bd4dea5aba0be7353"} Jan 07 03:53:39 crc kubenswrapper[4980]: I0107 03:53:39.392165 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8732847929999998 podStartE2EDuration="6.392136782s" podCreationTimestamp="2026-01-07 03:53:33 +0000 UTC" firstStartedPulling="2026-01-07 03:53:34.144737642 +0000 UTC m=+1260.710432397" lastFinishedPulling="2026-01-07 03:53:38.663589641 +0000 UTC m=+1265.229284386" observedRunningTime="2026-01-07 03:53:39.375505134 +0000 UTC m=+1265.941199909" watchObservedRunningTime="2026-01-07 03:53:39.392136782 +0000 UTC m=+1265.957831557" Jan 07 03:53:39 crc kubenswrapper[4980]: I0107 03:53:39.441057 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z444m"] Jan 07 03:53:39 crc kubenswrapper[4980]: I0107 03:53:39.796942 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:53:39 crc kubenswrapper[4980]: I0107 03:53:39.878728 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rgkl2"] Jan 07 03:53:39 crc kubenswrapper[4980]: I0107 03:53:39.879627 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" podUID="5532d293-9182-4446-b2db-619e1af161c4" containerName="dnsmasq-dns" containerID="cri-o://7ab0573e2f0e4c9045afedb78ad1c2e8f951b0b643c94f4e94c2299e6ed059f7" gracePeriod=10 Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.360662 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-z444m" event={"ID":"966ffc7e-1827-4cea-b4f1-f820b4e41986","Type":"ContainerStarted","Data":"477d0f2fec7a8903c93d035b7439e0c5030debaa6613216a8ab7b07e1564298e"} Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.360713 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z444m" event={"ID":"966ffc7e-1827-4cea-b4f1-f820b4e41986","Type":"ContainerStarted","Data":"8959b8398ae9198baf09079b357cc73e5cdd0daf214eb341aa6f68aeed9d80e4"} Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.366105 4980 generic.go:334] "Generic (PLEG): container finished" podID="5532d293-9182-4446-b2db-619e1af161c4" containerID="7ab0573e2f0e4c9045afedb78ad1c2e8f951b0b643c94f4e94c2299e6ed059f7" exitCode=0 Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.367438 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" event={"ID":"5532d293-9182-4446-b2db-619e1af161c4","Type":"ContainerDied","Data":"7ab0573e2f0e4c9045afedb78ad1c2e8f951b0b643c94f4e94c2299e6ed059f7"} Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.367475 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" event={"ID":"5532d293-9182-4446-b2db-619e1af161c4","Type":"ContainerDied","Data":"15ef04d9b316710222d6f82a0e3bf9bd30ead2a08b6871a9f43f57b107f4bec6"} Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.367508 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ef04d9b316710222d6f82a0e3bf9bd30ead2a08b6871a9f43f57b107f4bec6" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.367524 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.386024 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-z444m" podStartSLOduration=2.3860043539999998 
podStartE2EDuration="2.386004354s" podCreationTimestamp="2026-01-07 03:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:40.37367973 +0000 UTC m=+1266.939374475" watchObservedRunningTime="2026-01-07 03:53:40.386004354 +0000 UTC m=+1266.951699099" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.394367 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.528618 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-svc\") pod \"5532d293-9182-4446-b2db-619e1af161c4\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.528760 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-swift-storage-0\") pod \"5532d293-9182-4446-b2db-619e1af161c4\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.528808 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5h5\" (UniqueName: \"kubernetes.io/projected/5532d293-9182-4446-b2db-619e1af161c4-kube-api-access-nk5h5\") pod \"5532d293-9182-4446-b2db-619e1af161c4\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.528995 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-sb\") pod \"5532d293-9182-4446-b2db-619e1af161c4\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " Jan 07 03:53:40 crc 
kubenswrapper[4980]: I0107 03:53:40.529029 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-config\") pod \"5532d293-9182-4446-b2db-619e1af161c4\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.529070 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-nb\") pod \"5532d293-9182-4446-b2db-619e1af161c4\" (UID: \"5532d293-9182-4446-b2db-619e1af161c4\") " Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.536594 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5532d293-9182-4446-b2db-619e1af161c4-kube-api-access-nk5h5" (OuterVolumeSpecName: "kube-api-access-nk5h5") pod "5532d293-9182-4446-b2db-619e1af161c4" (UID: "5532d293-9182-4446-b2db-619e1af161c4"). InnerVolumeSpecName "kube-api-access-nk5h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.578134 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5532d293-9182-4446-b2db-619e1af161c4" (UID: "5532d293-9182-4446-b2db-619e1af161c4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.579024 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5532d293-9182-4446-b2db-619e1af161c4" (UID: "5532d293-9182-4446-b2db-619e1af161c4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.587084 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5532d293-9182-4446-b2db-619e1af161c4" (UID: "5532d293-9182-4446-b2db-619e1af161c4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.600322 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-config" (OuterVolumeSpecName: "config") pod "5532d293-9182-4446-b2db-619e1af161c4" (UID: "5532d293-9182-4446-b2db-619e1af161c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.631269 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.631308 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk5h5\" (UniqueName: \"kubernetes.io/projected/5532d293-9182-4446-b2db-619e1af161c4-kube-api-access-nk5h5\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.631322 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.631333 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-config\") on node \"crc\" DevicePath \"\"" Jan 
07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.631344 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.631390 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5532d293-9182-4446-b2db-619e1af161c4" (UID: "5532d293-9182-4446-b2db-619e1af161c4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:53:40 crc kubenswrapper[4980]: I0107 03:53:40.732988 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5532d293-9182-4446-b2db-619e1af161c4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:41 crc kubenswrapper[4980]: I0107 03:53:41.379590 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rgkl2" Jan 07 03:53:41 crc kubenswrapper[4980]: I0107 03:53:41.439253 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rgkl2"] Jan 07 03:53:41 crc kubenswrapper[4980]: I0107 03:53:41.448184 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rgkl2"] Jan 07 03:53:41 crc kubenswrapper[4980]: I0107 03:53:41.751772 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5532d293-9182-4446-b2db-619e1af161c4" path="/var/lib/kubelet/pods/5532d293-9182-4446-b2db-619e1af161c4/volumes" Jan 07 03:53:44 crc kubenswrapper[4980]: I0107 03:53:44.419490 4980 generic.go:334] "Generic (PLEG): container finished" podID="966ffc7e-1827-4cea-b4f1-f820b4e41986" containerID="477d0f2fec7a8903c93d035b7439e0c5030debaa6613216a8ab7b07e1564298e" exitCode=0 Jan 07 03:53:44 crc kubenswrapper[4980]: I0107 03:53:44.419536 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z444m" event={"ID":"966ffc7e-1827-4cea-b4f1-f820b4e41986","Type":"ContainerDied","Data":"477d0f2fec7a8903c93d035b7439e0c5030debaa6613216a8ab7b07e1564298e"} Jan 07 03:53:45 crc kubenswrapper[4980]: I0107 03:53:45.902275 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.067787 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-config-data\") pod \"966ffc7e-1827-4cea-b4f1-f820b4e41986\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.067861 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-combined-ca-bundle\") pod \"966ffc7e-1827-4cea-b4f1-f820b4e41986\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.067901 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-scripts\") pod \"966ffc7e-1827-4cea-b4f1-f820b4e41986\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.068080 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxzsn\" (UniqueName: \"kubernetes.io/projected/966ffc7e-1827-4cea-b4f1-f820b4e41986-kube-api-access-wxzsn\") pod \"966ffc7e-1827-4cea-b4f1-f820b4e41986\" (UID: \"966ffc7e-1827-4cea-b4f1-f820b4e41986\") " Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.073748 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-scripts" (OuterVolumeSpecName: "scripts") pod "966ffc7e-1827-4cea-b4f1-f820b4e41986" (UID: "966ffc7e-1827-4cea-b4f1-f820b4e41986"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.074666 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966ffc7e-1827-4cea-b4f1-f820b4e41986-kube-api-access-wxzsn" (OuterVolumeSpecName: "kube-api-access-wxzsn") pod "966ffc7e-1827-4cea-b4f1-f820b4e41986" (UID: "966ffc7e-1827-4cea-b4f1-f820b4e41986"). InnerVolumeSpecName "kube-api-access-wxzsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.099599 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "966ffc7e-1827-4cea-b4f1-f820b4e41986" (UID: "966ffc7e-1827-4cea-b4f1-f820b4e41986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.117273 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-config-data" (OuterVolumeSpecName: "config-data") pod "966ffc7e-1827-4cea-b4f1-f820b4e41986" (UID: "966ffc7e-1827-4cea-b4f1-f820b4e41986"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.170466 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxzsn\" (UniqueName: \"kubernetes.io/projected/966ffc7e-1827-4cea-b4f1-f820b4e41986-kube-api-access-wxzsn\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.170496 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.170508 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.170517 4980 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966ffc7e-1827-4cea-b4f1-f820b4e41986-scripts\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.452119 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z444m" event={"ID":"966ffc7e-1827-4cea-b4f1-f820b4e41986","Type":"ContainerDied","Data":"8959b8398ae9198baf09079b357cc73e5cdd0daf214eb341aa6f68aeed9d80e4"} Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.452188 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8959b8398ae9198baf09079b357cc73e5cdd0daf214eb341aa6f68aeed9d80e4" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.452237 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z444m" Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.620439 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.620815 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-log" containerID="cri-o://ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4" gracePeriod=30 Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.622745 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-api" containerID="cri-o://884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577" gracePeriod=30 Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.641654 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.641970 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0650f850-3e0d-4d89-a785-4d34664267ef" containerName="nova-scheduler-scheduler" containerID="cri-o://5bc7e0c08cfc6f0e62c502576b252aa91232b6a8bc837cfbc58619f734f9926a" gracePeriod=30 Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.657623 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.658236 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-log" containerID="cri-o://146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd" gracePeriod=30 Jan 07 03:53:46 crc kubenswrapper[4980]: I0107 03:53:46.658429 4980 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-metadata" containerID="cri-o://f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49" gracePeriod=30 Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.259498 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.406391 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-public-tls-certs\") pod \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.406487 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cad4f60-a0cf-48f8-aa76-230e58f2e012-logs\") pod \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.406646 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-combined-ca-bundle\") pod \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.406727 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-internal-tls-certs\") pod \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.406787 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-blxp7\" (UniqueName: \"kubernetes.io/projected/0cad4f60-a0cf-48f8-aa76-230e58f2e012-kube-api-access-blxp7\") pod \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.407584 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cad4f60-a0cf-48f8-aa76-230e58f2e012-logs" (OuterVolumeSpecName: "logs") pod "0cad4f60-a0cf-48f8-aa76-230e58f2e012" (UID: "0cad4f60-a0cf-48f8-aa76-230e58f2e012"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.408718 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-config-data\") pod \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\" (UID: \"0cad4f60-a0cf-48f8-aa76-230e58f2e012\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.409301 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cad4f60-a0cf-48f8-aa76-230e58f2e012-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.419766 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cad4f60-a0cf-48f8-aa76-230e58f2e012-kube-api-access-blxp7" (OuterVolumeSpecName: "kube-api-access-blxp7") pod "0cad4f60-a0cf-48f8-aa76-230e58f2e012" (UID: "0cad4f60-a0cf-48f8-aa76-230e58f2e012"). InnerVolumeSpecName "kube-api-access-blxp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.443481 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-config-data" (OuterVolumeSpecName: "config-data") pod "0cad4f60-a0cf-48f8-aa76-230e58f2e012" (UID: "0cad4f60-a0cf-48f8-aa76-230e58f2e012"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.454724 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cad4f60-a0cf-48f8-aa76-230e58f2e012" (UID: "0cad4f60-a0cf-48f8-aa76-230e58f2e012"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.465405 4980 generic.go:334] "Generic (PLEG): container finished" podID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerID="884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577" exitCode=0 Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.465435 4980 generic.go:334] "Generic (PLEG): container finished" podID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerID="ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4" exitCode=143 Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.465480 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cad4f60-a0cf-48f8-aa76-230e58f2e012","Type":"ContainerDied","Data":"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577"} Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.465508 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0cad4f60-a0cf-48f8-aa76-230e58f2e012","Type":"ContainerDied","Data":"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4"} Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.465519 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cad4f60-a0cf-48f8-aa76-230e58f2e012","Type":"ContainerDied","Data":"da446ecb56d0fb2ce374e15a42b9cb864172947381af1b35640fe91ff278c823"} Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.465504 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.465537 4980 scope.go:117] "RemoveContainer" containerID="884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.473025 4980 generic.go:334] "Generic (PLEG): container finished" podID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerID="146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd" exitCode=143 Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.473096 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c53ece-42c6-4005-81ea-d51fac7c3c11","Type":"ContainerDied","Data":"146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd"} Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.474685 4980 generic.go:334] "Generic (PLEG): container finished" podID="0650f850-3e0d-4d89-a785-4d34664267ef" containerID="5bc7e0c08cfc6f0e62c502576b252aa91232b6a8bc837cfbc58619f734f9926a" exitCode=0 Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.474795 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0650f850-3e0d-4d89-a785-4d34664267ef","Type":"ContainerDied","Data":"5bc7e0c08cfc6f0e62c502576b252aa91232b6a8bc837cfbc58619f734f9926a"} Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.479715 4980 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0cad4f60-a0cf-48f8-aa76-230e58f2e012" (UID: "0cad4f60-a0cf-48f8-aa76-230e58f2e012"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.482896 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0cad4f60-a0cf-48f8-aa76-230e58f2e012" (UID: "0cad4f60-a0cf-48f8-aa76-230e58f2e012"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.496511 4980 scope.go:117] "RemoveContainer" containerID="ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.498740 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.510982 4980 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.511036 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.511049 4980 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.511060 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blxp7\" (UniqueName: \"kubernetes.io/projected/0cad4f60-a0cf-48f8-aa76-230e58f2e012-kube-api-access-blxp7\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.511071 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad4f60-a0cf-48f8-aa76-230e58f2e012-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.530087 4980 scope.go:117] "RemoveContainer" containerID="884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577" Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.567813 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577\": container with ID starting with 884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577 not found: ID does not exist" 
containerID="884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.568519 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577"} err="failed to get container status \"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577\": rpc error: code = NotFound desc = could not find container \"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577\": container with ID starting with 884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577 not found: ID does not exist" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.568875 4980 scope.go:117] "RemoveContainer" containerID="ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4" Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.569250 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4\": container with ID starting with ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4 not found: ID does not exist" containerID="ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.569294 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4"} err="failed to get container status \"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4\": rpc error: code = NotFound desc = could not find container \"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4\": container with ID starting with ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4 not found: ID does not exist" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.569323 4980 scope.go:117] 
"RemoveContainer" containerID="884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.569599 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577"} err="failed to get container status \"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577\": rpc error: code = NotFound desc = could not find container \"884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577\": container with ID starting with 884473aefe37064196ba8d561a511782f3ada743372fb7a4df05a34f7b137577 not found: ID does not exist" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.569688 4980 scope.go:117] "RemoveContainer" containerID="ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.570041 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4"} err="failed to get container status \"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4\": rpc error: code = NotFound desc = could not find container \"ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4\": container with ID starting with ef80d4f3e38347e39e73a32aa07606f8e9638b417287a2e4eeb70614b05e8de4 not found: ID does not exist" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.616617 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9pgn\" (UniqueName: \"kubernetes.io/projected/0650f850-3e0d-4d89-a785-4d34664267ef-kube-api-access-p9pgn\") pod \"0650f850-3e0d-4d89-a785-4d34664267ef\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.616840 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-config-data\") pod \"0650f850-3e0d-4d89-a785-4d34664267ef\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.616912 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-combined-ca-bundle\") pod \"0650f850-3e0d-4d89-a785-4d34664267ef\" (UID: \"0650f850-3e0d-4d89-a785-4d34664267ef\") " Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.623390 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0650f850-3e0d-4d89-a785-4d34664267ef-kube-api-access-p9pgn" (OuterVolumeSpecName: "kube-api-access-p9pgn") pod "0650f850-3e0d-4d89-a785-4d34664267ef" (UID: "0650f850-3e0d-4d89-a785-4d34664267ef"). InnerVolumeSpecName "kube-api-access-p9pgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.645845 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-config-data" (OuterVolumeSpecName: "config-data") pod "0650f850-3e0d-4d89-a785-4d34664267ef" (UID: "0650f850-3e0d-4d89-a785-4d34664267ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.648786 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0650f850-3e0d-4d89-a785-4d34664267ef" (UID: "0650f850-3e0d-4d89-a785-4d34664267ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.719025 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9pgn\" (UniqueName: \"kubernetes.io/projected/0650f850-3e0d-4d89-a785-4d34664267ef-kube-api-access-p9pgn\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.719075 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.719085 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0650f850-3e0d-4d89-a785-4d34664267ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.807906 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.820648 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.839678 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.840212 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966ffc7e-1827-4cea-b4f1-f820b4e41986" containerName="nova-manage" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840235 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="966ffc7e-1827-4cea-b4f1-f820b4e41986" containerName="nova-manage" Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.840254 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5532d293-9182-4446-b2db-619e1af161c4" containerName="init" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840263 4980 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5532d293-9182-4446-b2db-619e1af161c4" containerName="init" Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.840283 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0650f850-3e0d-4d89-a785-4d34664267ef" containerName="nova-scheduler-scheduler" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840291 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0650f850-3e0d-4d89-a785-4d34664267ef" containerName="nova-scheduler-scheduler" Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.840311 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-log" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840320 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-log" Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.840335 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-api" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840342 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-api" Jan 07 03:53:47 crc kubenswrapper[4980]: E0107 03:53:47.840352 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5532d293-9182-4446-b2db-619e1af161c4" containerName="dnsmasq-dns" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840359 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5532d293-9182-4446-b2db-619e1af161c4" containerName="dnsmasq-dns" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840599 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="966ffc7e-1827-4cea-b4f1-f820b4e41986" containerName="nova-manage" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840618 4980 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5532d293-9182-4446-b2db-619e1af161c4" containerName="dnsmasq-dns" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840634 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-log" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840646 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="0650f850-3e0d-4d89-a785-4d34664267ef" containerName="nova-scheduler-scheduler" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.840658 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" containerName="nova-api-api" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.841974 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.844958 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.844969 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.846824 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:47 crc kubenswrapper[4980]: I0107 03:53:47.853224 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.026306 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.026840 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-public-tls-certs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.026929 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-config-data\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.027004 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.027161 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/111ec39e-2b02-4d0d-89cf-9484a6399fd7-logs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.027258 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-785v6\" (UniqueName: \"kubernetes.io/projected/111ec39e-2b02-4d0d-89cf-9484a6399fd7-kube-api-access-785v6\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.132207 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-public-tls-certs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.132367 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-config-data\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.132444 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.132615 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/111ec39e-2b02-4d0d-89cf-9484a6399fd7-logs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.132710 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-785v6\" (UniqueName: \"kubernetes.io/projected/111ec39e-2b02-4d0d-89cf-9484a6399fd7-kube-api-access-785v6\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.132785 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.133881 
4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/111ec39e-2b02-4d0d-89cf-9484a6399fd7-logs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.140174 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-config-data\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.143604 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.143718 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-public-tls-certs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.161039 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/111ec39e-2b02-4d0d-89cf-9484a6399fd7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.170866 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-785v6\" (UniqueName: \"kubernetes.io/projected/111ec39e-2b02-4d0d-89cf-9484a6399fd7-kube-api-access-785v6\") pod \"nova-api-0\" (UID: \"111ec39e-2b02-4d0d-89cf-9484a6399fd7\") " 
pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.202413 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.489400 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0650f850-3e0d-4d89-a785-4d34664267ef","Type":"ContainerDied","Data":"c12c186ee1898f70d0a0fb7be99e133cc8e2c872a0d539202b1cec87c8f133ab"} Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.489464 4980 scope.go:117] "RemoveContainer" containerID="5bc7e0c08cfc6f0e62c502576b252aa91232b6a8bc837cfbc58619f734f9926a" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.489495 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.536687 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.548385 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.561054 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.562827 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.568284 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.585067 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.748694 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62709b59-f907-4b1f-b0a4-bab71ce12d86-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.748773 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv86v\" (UniqueName: \"kubernetes.io/projected/62709b59-f907-4b1f-b0a4-bab71ce12d86-kube-api-access-tv86v\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.748805 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62709b59-f907-4b1f-b0a4-bab71ce12d86-config-data\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.775656 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.850276 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62709b59-f907-4b1f-b0a4-bab71ce12d86-config-data\") pod \"nova-scheduler-0\" (UID: 
\"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.850888 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62709b59-f907-4b1f-b0a4-bab71ce12d86-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.851059 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv86v\" (UniqueName: \"kubernetes.io/projected/62709b59-f907-4b1f-b0a4-bab71ce12d86-kube-api-access-tv86v\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.858271 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62709b59-f907-4b1f-b0a4-bab71ce12d86-config-data\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.858984 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62709b59-f907-4b1f-b0a4-bab71ce12d86-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.872026 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv86v\" (UniqueName: \"kubernetes.io/projected/62709b59-f907-4b1f-b0a4-bab71ce12d86-kube-api-access-tv86v\") pod \"nova-scheduler-0\" (UID: \"62709b59-f907-4b1f-b0a4-bab71ce12d86\") " pod="openstack/nova-scheduler-0" Jan 07 03:53:48 crc kubenswrapper[4980]: I0107 03:53:48.881374 4980 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.387678 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 07 03:53:49 crc kubenswrapper[4980]: W0107 03:53:49.390583 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62709b59_f907_4b1f_b0a4_bab71ce12d86.slice/crio-4f0d705369431ef74a006fc353dc9897b00a3901cd70debb40ec62711ac5d35f WatchSource:0}: Error finding container 4f0d705369431ef74a006fc353dc9897b00a3901cd70debb40ec62711ac5d35f: Status 404 returned error can't find the container with id 4f0d705369431ef74a006fc353dc9897b00a3901cd70debb40ec62711ac5d35f Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.505664 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62709b59-f907-4b1f-b0a4-bab71ce12d86","Type":"ContainerStarted","Data":"4f0d705369431ef74a006fc353dc9897b00a3901cd70debb40ec62711ac5d35f"} Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.510806 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"111ec39e-2b02-4d0d-89cf-9484a6399fd7","Type":"ContainerStarted","Data":"3bf7c013e8788e229e216158cdfb6c04f8052da689979bed7c2c3a777398d876"} Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.510903 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"111ec39e-2b02-4d0d-89cf-9484a6399fd7","Type":"ContainerStarted","Data":"a528d85c91f079eedf1a845da66ebb64510b7896042a893d397fb7379052092b"} Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.510925 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"111ec39e-2b02-4d0d-89cf-9484a6399fd7","Type":"ContainerStarted","Data":"e954726977a4e2059babd7c2ad2a1895929e04af95e9ac0f38cd3f61fc20dd80"} Jan 07 03:53:49 crc 
kubenswrapper[4980]: I0107 03:53:49.541685 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5415456130000003 podStartE2EDuration="2.541545613s" podCreationTimestamp="2026-01-07 03:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:49.534514164 +0000 UTC m=+1276.100208909" watchObservedRunningTime="2026-01-07 03:53:49.541545613 +0000 UTC m=+1276.107240368" Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.751249 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0650f850-3e0d-4d89-a785-4d34664267ef" path="/var/lib/kubelet/pods/0650f850-3e0d-4d89-a785-4d34664267ef/volumes" Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.751818 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cad4f60-a0cf-48f8-aa76-230e58f2e012" path="/var/lib/kubelet/pods/0cad4f60-a0cf-48f8-aa76-230e58f2e012/volumes" Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.817911 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:36352->10.217.0.192:8775: read: connection reset by peer" Jan 07 03:53:49 crc kubenswrapper[4980]: I0107 03:53:49.818910 4980 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:36340->10.217.0.192:8775: read: connection reset by peer" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.386442 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.490958 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-combined-ca-bundle\") pod \"51c53ece-42c6-4005-81ea-d51fac7c3c11\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.491036 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-nova-metadata-tls-certs\") pod \"51c53ece-42c6-4005-81ea-d51fac7c3c11\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.491174 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-config-data\") pod \"51c53ece-42c6-4005-81ea-d51fac7c3c11\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.491211 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hslj5\" (UniqueName: \"kubernetes.io/projected/51c53ece-42c6-4005-81ea-d51fac7c3c11-kube-api-access-hslj5\") pod \"51c53ece-42c6-4005-81ea-d51fac7c3c11\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.491257 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c53ece-42c6-4005-81ea-d51fac7c3c11-logs\") pod \"51c53ece-42c6-4005-81ea-d51fac7c3c11\" (UID: \"51c53ece-42c6-4005-81ea-d51fac7c3c11\") " Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.492205 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/51c53ece-42c6-4005-81ea-d51fac7c3c11-logs" (OuterVolumeSpecName: "logs") pod "51c53ece-42c6-4005-81ea-d51fac7c3c11" (UID: "51c53ece-42c6-4005-81ea-d51fac7c3c11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.498588 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c53ece-42c6-4005-81ea-d51fac7c3c11-kube-api-access-hslj5" (OuterVolumeSpecName: "kube-api-access-hslj5") pod "51c53ece-42c6-4005-81ea-d51fac7c3c11" (UID: "51c53ece-42c6-4005-81ea-d51fac7c3c11"). InnerVolumeSpecName "kube-api-access-hslj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.525318 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51c53ece-42c6-4005-81ea-d51fac7c3c11" (UID: "51c53ece-42c6-4005-81ea-d51fac7c3c11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.534250 4980 generic.go:334] "Generic (PLEG): container finished" podID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerID="f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49" exitCode=0 Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.534354 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c53ece-42c6-4005-81ea-d51fac7c3c11","Type":"ContainerDied","Data":"f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49"} Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.534403 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.534619 4980 scope.go:117] "RemoveContainer" containerID="f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.534597 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c53ece-42c6-4005-81ea-d51fac7c3c11","Type":"ContainerDied","Data":"27d4625816783f4e6c1da9a49124095ef013fe8dc0212710287811f42725d86f"} Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.539222 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62709b59-f907-4b1f-b0a4-bab71ce12d86","Type":"ContainerStarted","Data":"28796d050dfa9729a95688d68984ce1a915f0f209f7d775f9b86c1d23a56a0ce"} Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.542167 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-config-data" (OuterVolumeSpecName: "config-data") pod "51c53ece-42c6-4005-81ea-d51fac7c3c11" (UID: "51c53ece-42c6-4005-81ea-d51fac7c3c11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.567120 4980 scope.go:117] "RemoveContainer" containerID="146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.571410 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.571384167 podStartE2EDuration="2.571384167s" podCreationTimestamp="2026-01-07 03:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:50.560956841 +0000 UTC m=+1277.126651576" watchObservedRunningTime="2026-01-07 03:53:50.571384167 +0000 UTC m=+1277.137078942" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.573909 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "51c53ece-42c6-4005-81ea-d51fac7c3c11" (UID: "51c53ece-42c6-4005-81ea-d51fac7c3c11"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.593597 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.593632 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hslj5\" (UniqueName: \"kubernetes.io/projected/51c53ece-42c6-4005-81ea-d51fac7c3c11-kube-api-access-hslj5\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.593644 4980 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c53ece-42c6-4005-81ea-d51fac7c3c11-logs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.593652 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.593662 4980 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51c53ece-42c6-4005-81ea-d51fac7c3c11-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.593672 4980 scope.go:117] "RemoveContainer" containerID="f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49" Jan 07 03:53:50 crc kubenswrapper[4980]: E0107 03:53:50.594049 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49\": container with ID starting with f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49 not found: ID does not exist" 
containerID="f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.594145 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49"} err="failed to get container status \"f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49\": rpc error: code = NotFound desc = could not find container \"f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49\": container with ID starting with f84488728aca055259046af054255ce8a45b35a60906100735ff697a8ea28d49 not found: ID does not exist" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.594227 4980 scope.go:117] "RemoveContainer" containerID="146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd" Jan 07 03:53:50 crc kubenswrapper[4980]: E0107 03:53:50.594739 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd\": container with ID starting with 146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd not found: ID does not exist" containerID="146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.594803 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd"} err="failed to get container status \"146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd\": rpc error: code = NotFound desc = could not find container \"146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd\": container with ID starting with 146e7b25472190e284b6bc42bff65a7bd6f1691c08ef1a1489305ee4df5b12dd not found: ID does not exist" Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.956183 4980 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:50 crc kubenswrapper[4980]: I0107 03:53:50.968356 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.012917 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:51 crc kubenswrapper[4980]: E0107 03:53:51.013976 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-metadata" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.014007 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-metadata" Jan 07 03:53:51 crc kubenswrapper[4980]: E0107 03:53:51.014055 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-log" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.014064 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-log" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.014295 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-log" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.014315 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" containerName="nova-metadata-metadata" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.015884 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.018438 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.020513 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.039545 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.212249 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.212341 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq97h\" (UniqueName: \"kubernetes.io/projected/c7266195-f7f5-40e2-9c60-97a0d6684272-kube-api-access-fq97h\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.212406 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.213505 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-config-data\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.213757 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7266195-f7f5-40e2-9c60-97a0d6684272-logs\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.315516 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.315673 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq97h\" (UniqueName: \"kubernetes.io/projected/c7266195-f7f5-40e2-9c60-97a0d6684272-kube-api-access-fq97h\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.315771 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.315852 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-config-data\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " 
pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.315962 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7266195-f7f5-40e2-9c60-97a0d6684272-logs\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.316655 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7266195-f7f5-40e2-9c60-97a0d6684272-logs\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.322932 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-config-data\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.323011 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.327506 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7266195-f7f5-40e2-9c60-97a0d6684272-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.351689 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq97h\" (UniqueName: 
\"kubernetes.io/projected/c7266195-f7f5-40e2-9c60-97a0d6684272-kube-api-access-fq97h\") pod \"nova-metadata-0\" (UID: \"c7266195-f7f5-40e2-9c60-97a0d6684272\") " pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.652620 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 07 03:53:51 crc kubenswrapper[4980]: I0107 03:53:51.760621 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c53ece-42c6-4005-81ea-d51fac7c3c11" path="/var/lib/kubelet/pods/51c53ece-42c6-4005-81ea-d51fac7c3c11/volumes" Jan 07 03:53:52 crc kubenswrapper[4980]: I0107 03:53:52.234485 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 07 03:53:52 crc kubenswrapper[4980]: I0107 03:53:52.563374 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7266195-f7f5-40e2-9c60-97a0d6684272","Type":"ContainerStarted","Data":"f2ca43b93375ee58290ef4a631f0b437e275f37bbc68425e56e07538731b8c30"} Jan 07 03:53:52 crc kubenswrapper[4980]: I0107 03:53:52.563428 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7266195-f7f5-40e2-9c60-97a0d6684272","Type":"ContainerStarted","Data":"e0c175298d7c6ef5e86e2bc9dc99e06df4ac181db296432730126c41d3dfd500"} Jan 07 03:53:53 crc kubenswrapper[4980]: I0107 03:53:53.584656 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7266195-f7f5-40e2-9c60-97a0d6684272","Type":"ContainerStarted","Data":"b486868a62ad3670f80d31dce222ea35b7463c90d124e4ac4dabb33ffcdda7ea"} Jan 07 03:53:53 crc kubenswrapper[4980]: I0107 03:53:53.622447 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.6224169980000003 podStartE2EDuration="3.622416998s" podCreationTimestamp="2026-01-07 03:53:50 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:53:53.609949659 +0000 UTC m=+1280.175644434" watchObservedRunningTime="2026-01-07 03:53:53.622416998 +0000 UTC m=+1280.188111773" Jan 07 03:53:53 crc kubenswrapper[4980]: I0107 03:53:53.882044 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 07 03:53:56 crc kubenswrapper[4980]: I0107 03:53:56.652798 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 07 03:53:56 crc kubenswrapper[4980]: I0107 03:53:56.653136 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 07 03:53:58 crc kubenswrapper[4980]: I0107 03:53:58.203615 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 07 03:53:58 crc kubenswrapper[4980]: I0107 03:53:58.205388 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 07 03:53:58 crc kubenswrapper[4980]: I0107 03:53:58.881616 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 07 03:53:58 crc kubenswrapper[4980]: I0107 03:53:58.923624 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 07 03:53:59 crc kubenswrapper[4980]: I0107 03:53:59.225826 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="111ec39e-2b02-4d0d-89cf-9484a6399fd7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:59 crc kubenswrapper[4980]: I0107 03:53:59.225854 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="111ec39e-2b02-4d0d-89cf-9484a6399fd7" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 07 03:53:59 crc kubenswrapper[4980]: I0107 03:53:59.726637 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 07 03:54:01 crc kubenswrapper[4980]: I0107 03:54:01.653800 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 07 03:54:01 crc kubenswrapper[4980]: I0107 03:54:01.654400 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 07 03:54:02 crc kubenswrapper[4980]: I0107 03:54:02.674900 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c7266195-f7f5-40e2-9c60-97a0d6684272" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 07 03:54:02 crc kubenswrapper[4980]: I0107 03:54:02.674860 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c7266195-f7f5-40e2-9c60-97a0d6684272" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 07 03:54:03 crc kubenswrapper[4980]: I0107 03:54:03.682908 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 07 03:54:06 crc kubenswrapper[4980]: I0107 03:54:06.542702 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:54:06 crc kubenswrapper[4980]: I0107 03:54:06.543175 4980 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:54:08 crc kubenswrapper[4980]: I0107 03:54:08.215480 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 07 03:54:08 crc kubenswrapper[4980]: I0107 03:54:08.216871 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 07 03:54:08 crc kubenswrapper[4980]: I0107 03:54:08.222487 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 07 03:54:08 crc kubenswrapper[4980]: I0107 03:54:08.229896 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 07 03:54:08 crc kubenswrapper[4980]: I0107 03:54:08.795091 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 07 03:54:08 crc kubenswrapper[4980]: I0107 03:54:08.804819 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 07 03:54:11 crc kubenswrapper[4980]: I0107 03:54:11.661120 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 07 03:54:11 crc kubenswrapper[4980]: I0107 03:54:11.662855 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 07 03:54:11 crc kubenswrapper[4980]: I0107 03:54:11.670784 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 07 03:54:11 crc kubenswrapper[4980]: I0107 03:54:11.862747 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 07 03:54:20 crc 
kubenswrapper[4980]: I0107 03:54:20.232956 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:54:21 crc kubenswrapper[4980]: I0107 03:54:21.225922 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 07 03:54:24 crc kubenswrapper[4980]: I0107 03:54:24.374475 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerName="rabbitmq" containerID="cri-o://5e36cba9c6cdfc2b8d1c15aa812ced9691350acef7022fe9c126099890c5a3a4" gracePeriod=604796 Jan 07 03:54:26 crc kubenswrapper[4980]: I0107 03:54:26.026226 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="26440bb2-233e-47e3-bb46-9122523bce68" containerName="rabbitmq" containerID="cri-o://882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b" gracePeriod=604796 Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.063344 4980 generic.go:334] "Generic (PLEG): container finished" podID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerID="5e36cba9c6cdfc2b8d1c15aa812ced9691350acef7022fe9c126099890c5a3a4" exitCode=0 Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.063480 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6714f510-9927-47da-bc8b-3e4a3995cdc6","Type":"ContainerDied","Data":"5e36cba9c6cdfc2b8d1c15aa812ced9691350acef7022fe9c126099890c5a3a4"} Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.064260 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6714f510-9927-47da-bc8b-3e4a3995cdc6","Type":"ContainerDied","Data":"bb718c083108e3eea538680a1e5b66e6e69c78c280d9bb547747622a1fc8dcef"} Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.064296 4980 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bb718c083108e3eea538680a1e5b66e6e69c78c280d9bb547747622a1fc8dcef" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.068852 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073130 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbhhh\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-kube-api-access-zbhhh\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073244 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-tls\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073314 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073414 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-plugins-conf\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073655 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-server-conf\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 
07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073728 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6714f510-9927-47da-bc8b-3e4a3995cdc6-erlang-cookie-secret\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073824 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-erlang-cookie\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073886 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-confd\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.073937 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-plugins\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.074025 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-config-data\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.074112 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/6714f510-9927-47da-bc8b-3e4a3995cdc6-pod-info\") pod \"6714f510-9927-47da-bc8b-3e4a3995cdc6\" (UID: \"6714f510-9927-47da-bc8b-3e4a3995cdc6\") " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.076087 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.076433 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.076478 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.083807 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.084054 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6714f510-9927-47da-bc8b-3e4a3995cdc6-pod-info" (OuterVolumeSpecName: "pod-info") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.084433 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6714f510-9927-47da-bc8b-3e4a3995cdc6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.087302 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.087506 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-kube-api-access-zbhhh" (OuterVolumeSpecName: "kube-api-access-zbhhh") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "kube-api-access-zbhhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.163073 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-config-data" (OuterVolumeSpecName: "config-data") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178642 4980 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6714f510-9927-47da-bc8b-3e4a3995cdc6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178679 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbhhh\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-kube-api-access-zbhhh\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178694 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178724 4980 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178736 4980 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178746 4980 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/6714f510-9927-47da-bc8b-3e4a3995cdc6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178756 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178778 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.178789 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.188019 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-server-conf" (OuterVolumeSpecName: "server-conf") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.225569 4980 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.253100 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6714f510-9927-47da-bc8b-3e4a3995cdc6" (UID: "6714f510-9927-47da-bc8b-3e4a3995cdc6"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.281023 4980 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6714f510-9927-47da-bc8b-3e4a3995cdc6-server-conf\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.281061 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6714f510-9927-47da-bc8b-3e4a3995cdc6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:31 crc kubenswrapper[4980]: I0107 03:54:31.281075 4980 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.077498 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.114141 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.121593 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.186170 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:54:32 crc kubenswrapper[4980]: E0107 03:54:32.186733 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerName="rabbitmq" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.186757 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerName="rabbitmq" Jan 07 03:54:32 crc kubenswrapper[4980]: E0107 03:54:32.186771 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerName="setup-container" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.186779 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerName="setup-container" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.187009 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6714f510-9927-47da-bc8b-3e4a3995cdc6" containerName="rabbitmq" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.188256 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.191435 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6cc44" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.191642 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.192000 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.192645 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.197244 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.197475 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.197664 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.225372 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:54:32 crc 
kubenswrapper[4980]: I0107 03:54:32.303185 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303279 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303338 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303363 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94fkp\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-kube-api-access-94fkp\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303385 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303430 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303531 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303667 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303708 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-config-data\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303725 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.303910 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.408494 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.408825 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.408873 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94fkp\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-kube-api-access-94fkp\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.408895 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.408941 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.408962 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.408982 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.409003 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-config-data\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.409019 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.409060 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.409087 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.409523 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.410471 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.411019 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.411526 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-config-data\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.414056 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.414395 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.439529 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.440133 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.440369 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.445368 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.474530 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94fkp\" (UniqueName: \"kubernetes.io/projected/837d407a-b0ff-4fec-8c21-e30b95cd3d7b-kube-api-access-94fkp\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.482638 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"837d407a-b0ff-4fec-8c21-e30b95cd3d7b\") " pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.594547 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.692608 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815207 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-config-data\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815274 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-plugins\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815361 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-confd\") pod 
\"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815437 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815470 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxbsc\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-kube-api-access-pxbsc\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815497 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-erlang-cookie\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815525 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-plugins-conf\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815540 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-tls\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815585 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26440bb2-233e-47e3-bb46-9122523bce68-erlang-cookie-secret\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815618 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26440bb2-233e-47e3-bb46-9122523bce68-pod-info\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.815648 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-server-conf\") pod \"26440bb2-233e-47e3-bb46-9122523bce68\" (UID: \"26440bb2-233e-47e3-bb46-9122523bce68\") " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.821382 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.821778 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.823749 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.825742 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.826380 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26440bb2-233e-47e3-bb46-9122523bce68-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.828014 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/26440bb2-233e-47e3-bb46-9122523bce68-pod-info" (OuterVolumeSpecName: "pod-info") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.833953 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.841932 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-kube-api-access-pxbsc" (OuterVolumeSpecName: "kube-api-access-pxbsc") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "kube-api-access-pxbsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.864644 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-config-data" (OuterVolumeSpecName: "config-data") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.887061 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-server-conf" (OuterVolumeSpecName: "server-conf") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918118 4980 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918151 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxbsc\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-kube-api-access-pxbsc\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918162 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918171 4980 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918181 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918190 4980 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26440bb2-233e-47e3-bb46-9122523bce68-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918198 4980 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26440bb2-233e-47e3-bb46-9122523bce68-pod-info\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 
03:54:32.918205 4980 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-server-conf\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918213 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26440bb2-233e-47e3-bb46-9122523bce68-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.918222 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.931255 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "26440bb2-233e-47e3-bb46-9122523bce68" (UID: "26440bb2-233e-47e3-bb46-9122523bce68"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:32 crc kubenswrapper[4980]: I0107 03:54:32.945248 4980 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.019743 4980 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26440bb2-233e-47e3-bb46-9122523bce68-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.019785 4980 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.106489 4980 generic.go:334] "Generic (PLEG): container finished" podID="26440bb2-233e-47e3-bb46-9122523bce68" containerID="882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b" exitCode=0 Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.106607 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26440bb2-233e-47e3-bb46-9122523bce68","Type":"ContainerDied","Data":"882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b"} Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.106668 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26440bb2-233e-47e3-bb46-9122523bce68","Type":"ContainerDied","Data":"308772a2d63acb0eebd2588e53d0393d5cebdb5865a62d625e6c843906b35a25"} Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.106688 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.106715 4980 scope.go:117] "RemoveContainer" containerID="882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.120350 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.130013 4980 scope.go:117] "RemoveContainer" containerID="16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.152665 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.161996 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.170200 4980 scope.go:117] "RemoveContainer" containerID="882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b" Jan 07 03:54:33 crc kubenswrapper[4980]: E0107 03:54:33.170707 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b\": container with ID starting with 882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b not found: ID does not exist" containerID="882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.170754 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b"} err="failed to get container status \"882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b\": rpc error: code = NotFound desc = could not find container 
\"882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b\": container with ID starting with 882ee96934569ce83ce2b5c1e81a5acd141e7f0951d1e7057dc1a1740f34722b not found: ID does not exist" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.170771 4980 scope.go:117] "RemoveContainer" containerID="16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6" Jan 07 03:54:33 crc kubenswrapper[4980]: E0107 03:54:33.171022 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6\": container with ID starting with 16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6 not found: ID does not exist" containerID="16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.171055 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6"} err="failed to get container status \"16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6\": rpc error: code = NotFound desc = could not find container \"16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6\": container with ID starting with 16a10cc63fe3737551a1d1c1d38097ad76d70f15551bca8e701414288ead90f6 not found: ID does not exist" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.177195 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 07 03:54:33 crc kubenswrapper[4980]: E0107 03:54:33.177544 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26440bb2-233e-47e3-bb46-9122523bce68" containerName="rabbitmq" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.177618 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="26440bb2-233e-47e3-bb46-9122523bce68" containerName="rabbitmq" Jan 07 03:54:33 crc 
kubenswrapper[4980]: E0107 03:54:33.177634 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26440bb2-233e-47e3-bb46-9122523bce68" containerName="setup-container" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.177641 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="26440bb2-233e-47e3-bb46-9122523bce68" containerName="setup-container" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.177830 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="26440bb2-233e-47e3-bb46-9122523bce68" containerName="rabbitmq" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.178760 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.180247 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lkjrg" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.183049 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.183201 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.183616 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.183820 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.183960 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.184157 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 07 03:54:33 crc 
kubenswrapper[4980]: I0107 03:54:33.192510 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.272683 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hkms8"] Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.274099 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.275797 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.303076 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hkms8"] Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323317 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggt6\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-kube-api-access-5ggt6\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323415 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323440 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af77d785-4fe8-4d72-a393-a7da215c4c55-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323455 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323485 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323501 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323524 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323546 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323601 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323617 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.323636 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af77d785-4fe8-4d72-a393-a7da215c4c55-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425478 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425545 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc 
kubenswrapper[4980]: I0107 03:54:33.425620 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4ls\" (UniqueName: \"kubernetes.io/projected/038f34a1-4ae2-45ba-a55b-159b5b53f642-kube-api-access-vp4ls\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425669 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425696 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425721 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-config\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425745 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425772 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af77d785-4fe8-4d72-a393-a7da215c4c55-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425826 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ggt6\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-kube-api-access-5ggt6\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425847 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425870 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425924 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.425965 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426093 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426132 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af77d785-4fe8-4d72-a393-a7da215c4c55-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426210 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426237 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426261 4980 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426302 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426619 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426757 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.426913 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.427016 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.427450 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af77d785-4fe8-4d72-a393-a7da215c4c55-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.429539 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af77d785-4fe8-4d72-a393-a7da215c4c55-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.429980 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.430976 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af77d785-4fe8-4d72-a393-a7da215c4c55-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.431106 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.445088 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ggt6\" (UniqueName: \"kubernetes.io/projected/af77d785-4fe8-4d72-a393-a7da215c4c55-kube-api-access-5ggt6\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.461280 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af77d785-4fe8-4d72-a393-a7da215c4c55\") " pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.501935 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.528482 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.528578 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.528626 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.528653 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4ls\" (UniqueName: \"kubernetes.io/projected/038f34a1-4ae2-45ba-a55b-159b5b53f642-kube-api-access-vp4ls\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.528682 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-config\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.528697 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.528747 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.529403 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.529403 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.529630 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.529708 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.529713 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-config\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.530938 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-sb\") pod 
\"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.545683 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4ls\" (UniqueName: \"kubernetes.io/projected/038f34a1-4ae2-45ba-a55b-159b5b53f642-kube-api-access-vp4ls\") pod \"dnsmasq-dns-79bd4cc8c9-hkms8\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.601419 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.756169 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26440bb2-233e-47e3-bb46-9122523bce68" path="/var/lib/kubelet/pods/26440bb2-233e-47e3-bb46-9122523bce68/volumes" Jan 07 03:54:33 crc kubenswrapper[4980]: I0107 03:54:33.757291 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6714f510-9927-47da-bc8b-3e4a3995cdc6" path="/var/lib/kubelet/pods/6714f510-9927-47da-bc8b-3e4a3995cdc6/volumes" Jan 07 03:54:34 crc kubenswrapper[4980]: I0107 03:54:34.060500 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 07 03:54:34 crc kubenswrapper[4980]: I0107 03:54:34.122386 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"837d407a-b0ff-4fec-8c21-e30b95cd3d7b","Type":"ContainerStarted","Data":"a55d6f86bf7835d56fbe735701761a11c73bd04050dfc4d2a433f1eadbe0a4cf"} Jan 07 03:54:34 crc kubenswrapper[4980]: I0107 03:54:34.123902 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af77d785-4fe8-4d72-a393-a7da215c4c55","Type":"ContainerStarted","Data":"0851298fd8a178842f30ae32234339a31ca2391cbcbbde288729d5bc00f6d7f3"} Jan 07 03:54:34 crc 
kubenswrapper[4980]: I0107 03:54:34.138221 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hkms8"] Jan 07 03:54:34 crc kubenswrapper[4980]: W0107 03:54:34.140885 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod038f34a1_4ae2_45ba_a55b_159b5b53f642.slice/crio-26208a201d6420a04b02d41eb73793a51fb9cceefd40329e8642cd9d029c9c95 WatchSource:0}: Error finding container 26208a201d6420a04b02d41eb73793a51fb9cceefd40329e8642cd9d029c9c95: Status 404 returned error can't find the container with id 26208a201d6420a04b02d41eb73793a51fb9cceefd40329e8642cd9d029c9c95 Jan 07 03:54:35 crc kubenswrapper[4980]: I0107 03:54:35.137693 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"837d407a-b0ff-4fec-8c21-e30b95cd3d7b","Type":"ContainerStarted","Data":"ac15cfde8e6fc9c5c8c0d3bf492b286959aeb19bb6b6c35d46d84b0500d76661"} Jan 07 03:54:35 crc kubenswrapper[4980]: I0107 03:54:35.141721 4980 generic.go:334] "Generic (PLEG): container finished" podID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerID="d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0" exitCode=0 Jan 07 03:54:35 crc kubenswrapper[4980]: I0107 03:54:35.141772 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" event={"ID":"038f34a1-4ae2-45ba-a55b-159b5b53f642","Type":"ContainerDied","Data":"d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0"} Jan 07 03:54:35 crc kubenswrapper[4980]: I0107 03:54:35.141798 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" event={"ID":"038f34a1-4ae2-45ba-a55b-159b5b53f642","Type":"ContainerStarted","Data":"26208a201d6420a04b02d41eb73793a51fb9cceefd40329e8642cd9d029c9c95"} Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.156925 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" event={"ID":"038f34a1-4ae2-45ba-a55b-159b5b53f642","Type":"ContainerStarted","Data":"42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f"} Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.158171 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.160687 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af77d785-4fe8-4d72-a393-a7da215c4c55","Type":"ContainerStarted","Data":"cc64ad1d506c19058153132fbf6a182af647c29080387bd282ea3accae67fff9"} Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.186782 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" podStartSLOduration=3.186762505 podStartE2EDuration="3.186762505s" podCreationTimestamp="2026-01-07 03:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:54:36.183585676 +0000 UTC m=+1322.749280451" watchObservedRunningTime="2026-01-07 03:54:36.186762505 +0000 UTC m=+1322.752457250" Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.543944 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.544041 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.544114 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.545268 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9509c26a45d342c7045d51ecdcc5dc207e383364d36ad4c45d95a6527388485b"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 03:54:36 crc kubenswrapper[4980]: I0107 03:54:36.545391 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://9509c26a45d342c7045d51ecdcc5dc207e383364d36ad4c45d95a6527388485b" gracePeriod=600 Jan 07 03:54:37 crc kubenswrapper[4980]: I0107 03:54:37.174471 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="9509c26a45d342c7045d51ecdcc5dc207e383364d36ad4c45d95a6527388485b" exitCode=0 Jan 07 03:54:37 crc kubenswrapper[4980]: I0107 03:54:37.174633 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"9509c26a45d342c7045d51ecdcc5dc207e383364d36ad4c45d95a6527388485b"} Jan 07 03:54:37 crc kubenswrapper[4980]: I0107 03:54:37.176796 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862"} Jan 07 03:54:37 crc 
kubenswrapper[4980]: I0107 03:54:37.176839 4980 scope.go:117] "RemoveContainer" containerID="40fed48537fb4dd350c71735c8360a409552809cda596d45f1ede2d146d2801a" Jan 07 03:54:43 crc kubenswrapper[4980]: I0107 03:54:43.603836 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:43 crc kubenswrapper[4980]: I0107 03:54:43.679279 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jqrfc"] Jan 07 03:54:43 crc kubenswrapper[4980]: I0107 03:54:43.679530 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" podUID="76e7cd2b-434d-48f9-8877-1395706691f4" containerName="dnsmasq-dns" containerID="cri-o://1194a3e34a6352a689f25e7333b32840ceca1ba433c13e271ed95f668a4b6392" gracePeriod=10 Jan 07 03:54:43 crc kubenswrapper[4980]: I0107 03:54:43.863570 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-n8zmd"] Jan 07 03:54:43 crc kubenswrapper[4980]: I0107 03:54:43.865169 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:43 crc kubenswrapper[4980]: I0107 03:54:43.898426 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-n8zmd"] Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.032989 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm77w\" (UniqueName: \"kubernetes.io/projected/2ac37f68-f3d9-42eb-a68c-d2526b730663-kube-api-access-rm77w\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.033296 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.033348 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.033388 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.033726 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-config\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.033932 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-dns-svc\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.034027 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.135430 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-config\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.135529 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-dns-svc\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.135614 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.136481 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-config\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.136626 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.136798 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-dns-svc\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.136949 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm77w\" (UniqueName: \"kubernetes.io/projected/2ac37f68-f3d9-42eb-a68c-d2526b730663-kube-api-access-rm77w\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.137298 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.137380 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.137603 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.138165 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.138237 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.138997 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2ac37f68-f3d9-42eb-a68c-d2526b730663-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.156333 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm77w\" (UniqueName: \"kubernetes.io/projected/2ac37f68-f3d9-42eb-a68c-d2526b730663-kube-api-access-rm77w\") pod \"dnsmasq-dns-55478c4467-n8zmd\" (UID: \"2ac37f68-f3d9-42eb-a68c-d2526b730663\") " pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.191703 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.307312 4980 generic.go:334] "Generic (PLEG): container finished" podID="76e7cd2b-434d-48f9-8877-1395706691f4" containerID="1194a3e34a6352a689f25e7333b32840ceca1ba433c13e271ed95f668a4b6392" exitCode=0 Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.307352 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" event={"ID":"76e7cd2b-434d-48f9-8877-1395706691f4","Type":"ContainerDied","Data":"1194a3e34a6352a689f25e7333b32840ceca1ba433c13e271ed95f668a4b6392"} Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.307859 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" event={"ID":"76e7cd2b-434d-48f9-8877-1395706691f4","Type":"ContainerDied","Data":"c679ef651640597cbdb28c69398e669adfb1c6ad217a04c58d6a6d85b78ef2e1"} Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.307879 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c679ef651640597cbdb28c69398e669adfb1c6ad217a04c58d6a6d85b78ef2e1" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.310323 4980 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.447167 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-nb\") pod \"76e7cd2b-434d-48f9-8877-1395706691f4\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.447457 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-sb\") pod \"76e7cd2b-434d-48f9-8877-1395706691f4\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.447488 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-config\") pod \"76e7cd2b-434d-48f9-8877-1395706691f4\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.447538 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-svc\") pod \"76e7cd2b-434d-48f9-8877-1395706691f4\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.447801 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-swift-storage-0\") pod \"76e7cd2b-434d-48f9-8877-1395706691f4\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.447834 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jgv9h\" (UniqueName: \"kubernetes.io/projected/76e7cd2b-434d-48f9-8877-1395706691f4-kube-api-access-jgv9h\") pod \"76e7cd2b-434d-48f9-8877-1395706691f4\" (UID: \"76e7cd2b-434d-48f9-8877-1395706691f4\") " Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.452392 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e7cd2b-434d-48f9-8877-1395706691f4-kube-api-access-jgv9h" (OuterVolumeSpecName: "kube-api-access-jgv9h") pod "76e7cd2b-434d-48f9-8877-1395706691f4" (UID: "76e7cd2b-434d-48f9-8877-1395706691f4"). InnerVolumeSpecName "kube-api-access-jgv9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.497624 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76e7cd2b-434d-48f9-8877-1395706691f4" (UID: "76e7cd2b-434d-48f9-8877-1395706691f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.499470 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-config" (OuterVolumeSpecName: "config") pod "76e7cd2b-434d-48f9-8877-1395706691f4" (UID: "76e7cd2b-434d-48f9-8877-1395706691f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.502238 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76e7cd2b-434d-48f9-8877-1395706691f4" (UID: "76e7cd2b-434d-48f9-8877-1395706691f4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.505293 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76e7cd2b-434d-48f9-8877-1395706691f4" (UID: "76e7cd2b-434d-48f9-8877-1395706691f4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.519984 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76e7cd2b-434d-48f9-8877-1395706691f4" (UID: "76e7cd2b-434d-48f9-8877-1395706691f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.553093 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.553117 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.553127 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.553136 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.553146 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgv9h\" (UniqueName: \"kubernetes.io/projected/76e7cd2b-434d-48f9-8877-1395706691f4-kube-api-access-jgv9h\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.553155 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e7cd2b-434d-48f9-8877-1395706691f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:44 crc kubenswrapper[4980]: I0107 03:54:44.671369 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-n8zmd"] Jan 07 03:54:44 crc kubenswrapper[4980]: W0107 03:54:44.673973 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ac37f68_f3d9_42eb_a68c_d2526b730663.slice/crio-a77d7d4aad1dbab93e426981aa8a83aad5e2009ec934aff72807a8dfa03d2147 WatchSource:0}: Error finding container a77d7d4aad1dbab93e426981aa8a83aad5e2009ec934aff72807a8dfa03d2147: Status 404 returned error can't find the container with id a77d7d4aad1dbab93e426981aa8a83aad5e2009ec934aff72807a8dfa03d2147 Jan 07 03:54:45 crc kubenswrapper[4980]: I0107 03:54:45.322627 4980 generic.go:334] "Generic (PLEG): container finished" podID="2ac37f68-f3d9-42eb-a68c-d2526b730663" containerID="d934952134ee45542eb351f37e229a268042453af148a8785a1f041487a6e72b" exitCode=0 Jan 07 03:54:45 crc kubenswrapper[4980]: I0107 03:54:45.322733 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-n8zmd" event={"ID":"2ac37f68-f3d9-42eb-a68c-d2526b730663","Type":"ContainerDied","Data":"d934952134ee45542eb351f37e229a268042453af148a8785a1f041487a6e72b"} Jan 07 03:54:45 crc kubenswrapper[4980]: I0107 03:54:45.323033 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55478c4467-n8zmd" event={"ID":"2ac37f68-f3d9-42eb-a68c-d2526b730663","Type":"ContainerStarted","Data":"a77d7d4aad1dbab93e426981aa8a83aad5e2009ec934aff72807a8dfa03d2147"} Jan 07 03:54:45 crc kubenswrapper[4980]: I0107 03:54:45.323155 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jqrfc" Jan 07 03:54:45 crc kubenswrapper[4980]: I0107 03:54:45.408721 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jqrfc"] Jan 07 03:54:45 crc kubenswrapper[4980]: I0107 03:54:45.419580 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jqrfc"] Jan 07 03:54:45 crc kubenswrapper[4980]: I0107 03:54:45.748278 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e7cd2b-434d-48f9-8877-1395706691f4" path="/var/lib/kubelet/pods/76e7cd2b-434d-48f9-8877-1395706691f4/volumes" Jan 07 03:54:46 crc kubenswrapper[4980]: I0107 03:54:46.346750 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-n8zmd" event={"ID":"2ac37f68-f3d9-42eb-a68c-d2526b730663","Type":"ContainerStarted","Data":"754058a8243dfad5035150be88f65250c0bd4bdb8c0ab62121060b8470aea240"} Jan 07 03:54:46 crc kubenswrapper[4980]: I0107 03:54:46.354250 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.193532 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-n8zmd" Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.237030 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-n8zmd" podStartSLOduration=11.237002155 podStartE2EDuration="11.237002155s" podCreationTimestamp="2026-01-07 03:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:54:46.383983129 +0000 UTC m=+1332.949677934" watchObservedRunningTime="2026-01-07 03:54:54.237002155 +0000 UTC m=+1340.802696920" Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.336850 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hkms8"] Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.337200 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" podUID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerName="dnsmasq-dns" containerID="cri-o://42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f" gracePeriod=10 Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.815897 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.875765 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-nb\") pod \"038f34a1-4ae2-45ba-a55b-159b5b53f642\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.931833 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "038f34a1-4ae2-45ba-a55b-159b5b53f642" (UID: "038f34a1-4ae2-45ba-a55b-159b5b53f642"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.976875 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-svc\") pod \"038f34a1-4ae2-45ba-a55b-159b5b53f642\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.976934 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-config\") pod \"038f34a1-4ae2-45ba-a55b-159b5b53f642\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.976956 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4ls\" (UniqueName: \"kubernetes.io/projected/038f34a1-4ae2-45ba-a55b-159b5b53f642-kube-api-access-vp4ls\") pod \"038f34a1-4ae2-45ba-a55b-159b5b53f642\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.976975 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-openstack-edpm-ipam\") pod \"038f34a1-4ae2-45ba-a55b-159b5b53f642\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.977028 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-sb\") pod \"038f34a1-4ae2-45ba-a55b-159b5b53f642\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.977229 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-swift-storage-0\") pod \"038f34a1-4ae2-45ba-a55b-159b5b53f642\" (UID: \"038f34a1-4ae2-45ba-a55b-159b5b53f642\") " Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.980040 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:54 crc kubenswrapper[4980]: I0107 03:54:54.983218 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/038f34a1-4ae2-45ba-a55b-159b5b53f642-kube-api-access-vp4ls" (OuterVolumeSpecName: "kube-api-access-vp4ls") pod "038f34a1-4ae2-45ba-a55b-159b5b53f642" (UID: "038f34a1-4ae2-45ba-a55b-159b5b53f642"). InnerVolumeSpecName "kube-api-access-vp4ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.029029 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "038f34a1-4ae2-45ba-a55b-159b5b53f642" (UID: "038f34a1-4ae2-45ba-a55b-159b5b53f642"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.041128 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "038f34a1-4ae2-45ba-a55b-159b5b53f642" (UID: "038f34a1-4ae2-45ba-a55b-159b5b53f642"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.048862 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "038f34a1-4ae2-45ba-a55b-159b5b53f642" (UID: "038f34a1-4ae2-45ba-a55b-159b5b53f642"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.048947 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "038f34a1-4ae2-45ba-a55b-159b5b53f642" (UID: "038f34a1-4ae2-45ba-a55b-159b5b53f642"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.056338 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-config" (OuterVolumeSpecName: "config") pod "038f34a1-4ae2-45ba-a55b-159b5b53f642" (UID: "038f34a1-4ae2-45ba-a55b-159b5b53f642"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.082000 4980 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.084660 4980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-config\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.084681 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4ls\" (UniqueName: \"kubernetes.io/projected/038f34a1-4ae2-45ba-a55b-159b5b53f642-kube-api-access-vp4ls\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.084693 4980 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.084710 4980 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.084722 4980 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/038f34a1-4ae2-45ba-a55b-159b5b53f642-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.456324 4980 generic.go:334] "Generic (PLEG): container finished" podID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerID="42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f" exitCode=0 Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.456388 4980 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.456407 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" event={"ID":"038f34a1-4ae2-45ba-a55b-159b5b53f642","Type":"ContainerDied","Data":"42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f"} Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.456446 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hkms8" event={"ID":"038f34a1-4ae2-45ba-a55b-159b5b53f642","Type":"ContainerDied","Data":"26208a201d6420a04b02d41eb73793a51fb9cceefd40329e8642cd9d029c9c95"} Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.456494 4980 scope.go:117] "RemoveContainer" containerID="42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.558996 4980 scope.go:117] "RemoveContainer" containerID="d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.559218 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hkms8"] Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.567906 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hkms8"] Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.590582 4980 scope.go:117] "RemoveContainer" containerID="42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f" Jan 07 03:54:55 crc kubenswrapper[4980]: E0107 03:54:55.591074 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f\": container with ID starting with 42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f not found: ID does not exist" 
containerID="42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.591237 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f"} err="failed to get container status \"42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f\": rpc error: code = NotFound desc = could not find container \"42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f\": container with ID starting with 42ab4c7ecca68984d207b60818251f59fc0cb3fa07b8e6964b31e28bdceeae3f not found: ID does not exist" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.591384 4980 scope.go:117] "RemoveContainer" containerID="d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0" Jan 07 03:54:55 crc kubenswrapper[4980]: E0107 03:54:55.592081 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0\": container with ID starting with d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0 not found: ID does not exist" containerID="d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.592123 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0"} err="failed to get container status \"d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0\": rpc error: code = NotFound desc = could not find container \"d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0\": container with ID starting with d81af0d2cd622224c8d7447c3a38e0d72e231f6d6560bcfadcd0d63b0ce6c2c0 not found: ID does not exist" Jan 07 03:54:55 crc kubenswrapper[4980]: I0107 03:54:55.752294 4980 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="038f34a1-4ae2-45ba-a55b-159b5b53f642" path="/var/lib/kubelet/pods/038f34a1-4ae2-45ba-a55b-159b5b53f642/volumes" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.089185 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs"] Jan 07 03:55:03 crc kubenswrapper[4980]: E0107 03:55:03.090126 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e7cd2b-434d-48f9-8877-1395706691f4" containerName="dnsmasq-dns" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.090141 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e7cd2b-434d-48f9-8877-1395706691f4" containerName="dnsmasq-dns" Jan 07 03:55:03 crc kubenswrapper[4980]: E0107 03:55:03.090161 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerName="init" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.090168 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerName="init" Jan 07 03:55:03 crc kubenswrapper[4980]: E0107 03:55:03.090193 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e7cd2b-434d-48f9-8877-1395706691f4" containerName="init" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.090200 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e7cd2b-434d-48f9-8877-1395706691f4" containerName="init" Jan 07 03:55:03 crc kubenswrapper[4980]: E0107 03:55:03.090209 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerName="dnsmasq-dns" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.090214 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerName="dnsmasq-dns" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.090391 4980 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="76e7cd2b-434d-48f9-8877-1395706691f4" containerName="dnsmasq-dns" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.090414 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="038f34a1-4ae2-45ba-a55b-159b5b53f642" containerName="dnsmasq-dns" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.091008 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.093419 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.093500 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.093668 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.094458 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.119456 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs"] Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.253695 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.253799 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.254002 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5bkq\" (UniqueName: \"kubernetes.io/projected/6c12fa92-7a85-42c7-90f2-3b837c2067f8-kube-api-access-m5bkq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.254051 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.356510 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.356638 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.356777 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5bkq\" (UniqueName: \"kubernetes.io/projected/6c12fa92-7a85-42c7-90f2-3b837c2067f8-kube-api-access-m5bkq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.356821 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.365294 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.366522 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.367421 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.379433 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5bkq\" (UniqueName: \"kubernetes.io/projected/6c12fa92-7a85-42c7-90f2-3b837c2067f8-kube-api-access-m5bkq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:03 crc kubenswrapper[4980]: I0107 03:55:03.422862 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:04 crc kubenswrapper[4980]: I0107 03:55:04.067032 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs"] Jan 07 03:55:04 crc kubenswrapper[4980]: W0107 03:55:04.074197 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c12fa92_7a85_42c7_90f2_3b837c2067f8.slice/crio-0bb604f46c4b8f05f281abbabbadbae1c1fa05caed92337c3a589869241b206d WatchSource:0}: Error finding container 0bb604f46c4b8f05f281abbabbadbae1c1fa05caed92337c3a589869241b206d: Status 404 returned error can't find the container with id 0bb604f46c4b8f05f281abbabbadbae1c1fa05caed92337c3a589869241b206d Jan 07 03:55:04 crc kubenswrapper[4980]: I0107 03:55:04.564988 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" event={"ID":"6c12fa92-7a85-42c7-90f2-3b837c2067f8","Type":"ContainerStarted","Data":"0bb604f46c4b8f05f281abbabbadbae1c1fa05caed92337c3a589869241b206d"} Jan 07 03:55:07 crc kubenswrapper[4980]: I0107 03:55:07.601316 4980 generic.go:334] "Generic (PLEG): container finished" podID="837d407a-b0ff-4fec-8c21-e30b95cd3d7b" containerID="ac15cfde8e6fc9c5c8c0d3bf492b286959aeb19bb6b6c35d46d84b0500d76661" exitCode=0 Jan 07 03:55:07 crc kubenswrapper[4980]: I0107 03:55:07.601440 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"837d407a-b0ff-4fec-8c21-e30b95cd3d7b","Type":"ContainerDied","Data":"ac15cfde8e6fc9c5c8c0d3bf492b286959aeb19bb6b6c35d46d84b0500d76661"} Jan 07 03:55:10 crc kubenswrapper[4980]: I0107 03:55:10.636459 4980 generic.go:334] "Generic (PLEG): container finished" podID="af77d785-4fe8-4d72-a393-a7da215c4c55" containerID="cc64ad1d506c19058153132fbf6a182af647c29080387bd282ea3accae67fff9" exitCode=0 Jan 07 03:55:10 crc 
kubenswrapper[4980]: I0107 03:55:10.636539 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af77d785-4fe8-4d72-a393-a7da215c4c55","Type":"ContainerDied","Data":"cc64ad1d506c19058153132fbf6a182af647c29080387bd282ea3accae67fff9"} Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.697846 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af77d785-4fe8-4d72-a393-a7da215c4c55","Type":"ContainerStarted","Data":"8a38cacd9ae87628a8f488289f146c98fc381195cf94363f118fea61a83188e3"} Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.698509 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.699463 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" event={"ID":"6c12fa92-7a85-42c7-90f2-3b837c2067f8","Type":"ContainerStarted","Data":"049b4d15608c2ad482b28f2d85d1261649c7b313eee68db60b5d92d8b56b8a6c"} Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.701534 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"837d407a-b0ff-4fec-8c21-e30b95cd3d7b","Type":"ContainerStarted","Data":"d05246dc33157d5c514b2633d444ca48cc3a2bde14990f0ae10bed8fa23d2165"} Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.701791 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.726617 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.726597973 podStartE2EDuration="40.726597973s" podCreationTimestamp="2026-01-07 03:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-07 03:55:13.723029262 +0000 UTC m=+1360.288724007" watchObservedRunningTime="2026-01-07 03:55:13.726597973 +0000 UTC m=+1360.292292708" Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.751716 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.751695972 podStartE2EDuration="41.751695972s" podCreationTimestamp="2026-01-07 03:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 03:55:13.748378939 +0000 UTC m=+1360.314073684" watchObservedRunningTime="2026-01-07 03:55:13.751695972 +0000 UTC m=+1360.317390717" Jan 07 03:55:13 crc kubenswrapper[4980]: I0107 03:55:13.779041 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" podStartSLOduration=1.5245837949999999 podStartE2EDuration="10.779026731s" podCreationTimestamp="2026-01-07 03:55:03 +0000 UTC" firstStartedPulling="2026-01-07 03:55:04.076203871 +0000 UTC m=+1350.641898616" lastFinishedPulling="2026-01-07 03:55:13.330646797 +0000 UTC m=+1359.896341552" observedRunningTime="2026-01-07 03:55:13.76513571 +0000 UTC m=+1360.330830455" watchObservedRunningTime="2026-01-07 03:55:13.779026731 +0000 UTC m=+1360.344721466" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.401443 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2skrr"] Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.404626 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.415755 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2skrr"] Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.570584 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-catalog-content\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.570636 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-utilities\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.570665 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6sm\" (UniqueName: \"kubernetes.io/projected/927fc97c-ef21-4121-af6d-ab1d596e3df5-kube-api-access-sv6sm\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.672473 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-catalog-content\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.672515 4980 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-utilities\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.672534 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6sm\" (UniqueName: \"kubernetes.io/projected/927fc97c-ef21-4121-af6d-ab1d596e3df5-kube-api-access-sv6sm\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.673186 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-catalog-content\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.673394 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-utilities\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.696047 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv6sm\" (UniqueName: \"kubernetes.io/projected/927fc97c-ef21-4121-af6d-ab1d596e3df5-kube-api-access-sv6sm\") pod \"redhat-operators-2skrr\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:21 crc kubenswrapper[4980]: I0107 03:55:21.728338 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:22 crc kubenswrapper[4980]: I0107 03:55:22.279068 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2skrr"] Jan 07 03:55:22 crc kubenswrapper[4980]: I0107 03:55:22.830139 4980 generic.go:334] "Generic (PLEG): container finished" podID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerID="2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc" exitCode=0 Jan 07 03:55:22 crc kubenswrapper[4980]: I0107 03:55:22.830324 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2skrr" event={"ID":"927fc97c-ef21-4121-af6d-ab1d596e3df5","Type":"ContainerDied","Data":"2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc"} Jan 07 03:55:22 crc kubenswrapper[4980]: I0107 03:55:22.830509 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2skrr" event={"ID":"927fc97c-ef21-4121-af6d-ab1d596e3df5","Type":"ContainerStarted","Data":"5ed7626e56a53e2ee23005d763e3dcf1b3f5dc522939768560034f4c88ffd1a1"} Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.504741 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.598173 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6ff7l"] Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.601126 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.622902 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ff7l"] Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.713525 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-utilities\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.713597 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-catalog-content\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.713670 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddjk\" (UniqueName: \"kubernetes.io/projected/56d8822f-16a7-4b38-ad4c-459796d41c33-kube-api-access-qddjk\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.816136 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddjk\" (UniqueName: \"kubernetes.io/projected/56d8822f-16a7-4b38-ad4c-459796d41c33-kube-api-access-qddjk\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.816292 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-utilities\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.816317 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-catalog-content\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.816779 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-utilities\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.816862 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-catalog-content\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.837976 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddjk\" (UniqueName: \"kubernetes.io/projected/56d8822f-16a7-4b38-ad4c-459796d41c33-kube-api-access-qddjk\") pod \"redhat-marketplace-6ff7l\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.856534 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2skrr" event={"ID":"927fc97c-ef21-4121-af6d-ab1d596e3df5","Type":"ContainerStarted","Data":"971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5"} Jan 07 03:55:23 crc kubenswrapper[4980]: I0107 03:55:23.943726 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:24 crc kubenswrapper[4980]: I0107 03:55:24.520879 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ff7l"] Jan 07 03:55:24 crc kubenswrapper[4980]: I0107 03:55:24.872125 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ff7l" event={"ID":"56d8822f-16a7-4b38-ad4c-459796d41c33","Type":"ContainerStarted","Data":"c3d5ee27e4e71ea184caa7978250a782c9ec52d536f47ad1aa9f2ee6d4aaa42b"} Jan 07 03:55:24 crc kubenswrapper[4980]: I0107 03:55:24.874987 4980 generic.go:334] "Generic (PLEG): container finished" podID="6c12fa92-7a85-42c7-90f2-3b837c2067f8" containerID="049b4d15608c2ad482b28f2d85d1261649c7b313eee68db60b5d92d8b56b8a6c" exitCode=0 Jan 07 03:55:24 crc kubenswrapper[4980]: I0107 03:55:24.875102 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" event={"ID":"6c12fa92-7a85-42c7-90f2-3b837c2067f8","Type":"ContainerDied","Data":"049b4d15608c2ad482b28f2d85d1261649c7b313eee68db60b5d92d8b56b8a6c"} Jan 07 03:55:25 crc kubenswrapper[4980]: I0107 03:55:25.885275 4980 generic.go:334] "Generic (PLEG): container finished" podID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerID="971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5" exitCode=0 Jan 07 03:55:25 crc kubenswrapper[4980]: I0107 03:55:25.885334 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2skrr" 
event={"ID":"927fc97c-ef21-4121-af6d-ab1d596e3df5","Type":"ContainerDied","Data":"971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5"} Jan 07 03:55:25 crc kubenswrapper[4980]: I0107 03:55:25.889712 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ff7l" event={"ID":"56d8822f-16a7-4b38-ad4c-459796d41c33","Type":"ContainerStarted","Data":"97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837"} Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.513290 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.679277 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-repo-setup-combined-ca-bundle\") pod \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.679409 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5bkq\" (UniqueName: \"kubernetes.io/projected/6c12fa92-7a85-42c7-90f2-3b837c2067f8-kube-api-access-m5bkq\") pod \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.679443 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-inventory\") pod \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.679498 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-ssh-key-openstack-edpm-ipam\") pod \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\" (UID: \"6c12fa92-7a85-42c7-90f2-3b837c2067f8\") " Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.686786 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6c12fa92-7a85-42c7-90f2-3b837c2067f8" (UID: "6c12fa92-7a85-42c7-90f2-3b837c2067f8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.689666 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c12fa92-7a85-42c7-90f2-3b837c2067f8-kube-api-access-m5bkq" (OuterVolumeSpecName: "kube-api-access-m5bkq") pod "6c12fa92-7a85-42c7-90f2-3b837c2067f8" (UID: "6c12fa92-7a85-42c7-90f2-3b837c2067f8"). InnerVolumeSpecName "kube-api-access-m5bkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.711149 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-inventory" (OuterVolumeSpecName: "inventory") pod "6c12fa92-7a85-42c7-90f2-3b837c2067f8" (UID: "6c12fa92-7a85-42c7-90f2-3b837c2067f8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.717771 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6c12fa92-7a85-42c7-90f2-3b837c2067f8" (UID: "6c12fa92-7a85-42c7-90f2-3b837c2067f8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.781544 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.781597 4980 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.781611 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5bkq\" (UniqueName: \"kubernetes.io/projected/6c12fa92-7a85-42c7-90f2-3b837c2067f8-kube-api-access-m5bkq\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.781624 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c12fa92-7a85-42c7-90f2-3b837c2067f8-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.901395 4980 generic.go:334] "Generic (PLEG): container finished" podID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerID="97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837" exitCode=0 Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.901465 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ff7l" event={"ID":"56d8822f-16a7-4b38-ad4c-459796d41c33","Type":"ContainerDied","Data":"97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837"} Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.907711 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2skrr" 
event={"ID":"927fc97c-ef21-4121-af6d-ab1d596e3df5","Type":"ContainerStarted","Data":"91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65"} Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.912719 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" event={"ID":"6c12fa92-7a85-42c7-90f2-3b837c2067f8","Type":"ContainerDied","Data":"0bb604f46c4b8f05f281abbabbadbae1c1fa05caed92337c3a589869241b206d"} Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.912770 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb604f46c4b8f05f281abbabbadbae1c1fa05caed92337c3a589869241b206d" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.912786 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs" Jan 07 03:55:26 crc kubenswrapper[4980]: I0107 03:55:26.976160 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2skrr" podStartSLOduration=2.444319952 podStartE2EDuration="5.976137592s" podCreationTimestamp="2026-01-07 03:55:21 +0000 UTC" firstStartedPulling="2026-01-07 03:55:22.832946047 +0000 UTC m=+1369.398640782" lastFinishedPulling="2026-01-07 03:55:26.364763677 +0000 UTC m=+1372.930458422" observedRunningTime="2026-01-07 03:55:26.960344583 +0000 UTC m=+1373.526039338" watchObservedRunningTime="2026-01-07 03:55:26.976137592 +0000 UTC m=+1373.541832327" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.002298 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v"] Jan 07 03:55:27 crc kubenswrapper[4980]: E0107 03:55:27.002700 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c12fa92-7a85-42c7-90f2-3b837c2067f8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 07 03:55:27 crc 
kubenswrapper[4980]: I0107 03:55:27.002724 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c12fa92-7a85-42c7-90f2-3b837c2067f8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.002907 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c12fa92-7a85-42c7-90f2-3b837c2067f8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.003566 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.007923 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.007979 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.008032 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.008447 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.018651 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v"] Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.189418 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.189546 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.189649 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kstj\" (UniqueName: \"kubernetes.io/projected/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-kube-api-access-9kstj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.291651 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.291797 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.291882 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kstj\" 
(UniqueName: \"kubernetes.io/projected/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-kube-api-access-9kstj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.297081 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.297079 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.316961 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kstj\" (UniqueName: \"kubernetes.io/projected/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-kube-api-access-9kstj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-phv2v\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.334569 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.940487 4980 generic.go:334] "Generic (PLEG): container finished" podID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerID="c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a" exitCode=0 Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.940592 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ff7l" event={"ID":"56d8822f-16a7-4b38-ad4c-459796d41c33","Type":"ContainerDied","Data":"c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a"} Jan 07 03:55:27 crc kubenswrapper[4980]: I0107 03:55:27.945761 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v"] Jan 07 03:55:28 crc kubenswrapper[4980]: I0107 03:55:28.952959 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ff7l" event={"ID":"56d8822f-16a7-4b38-ad4c-459796d41c33","Type":"ContainerStarted","Data":"c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c"} Jan 07 03:55:28 crc kubenswrapper[4980]: I0107 03:55:28.955876 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" event={"ID":"3b42fafe-35e4-45a7-b3c9-95d8b9caa607","Type":"ContainerStarted","Data":"829d94e1e8ee19798916d3d55d47f444ad30abc7d95e03b565fb26e429cd9751"} Jan 07 03:55:28 crc kubenswrapper[4980]: I0107 03:55:28.955910 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" event={"ID":"3b42fafe-35e4-45a7-b3c9-95d8b9caa607","Type":"ContainerStarted","Data":"e5d2da0807691fca385dcd68233e2e4f705d2c8d751a0f38b37890447fe8c263"} Jan 07 03:55:28 crc kubenswrapper[4980]: I0107 03:55:28.977028 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-6ff7l" podStartSLOduration=4.462134852 podStartE2EDuration="5.977012878s" podCreationTimestamp="2026-01-07 03:55:23 +0000 UTC" firstStartedPulling="2026-01-07 03:55:26.90407369 +0000 UTC m=+1373.469768425" lastFinishedPulling="2026-01-07 03:55:28.418951716 +0000 UTC m=+1374.984646451" observedRunningTime="2026-01-07 03:55:28.969138194 +0000 UTC m=+1375.534832929" watchObservedRunningTime="2026-01-07 03:55:28.977012878 +0000 UTC m=+1375.542707613" Jan 07 03:55:28 crc kubenswrapper[4980]: I0107 03:55:28.991522 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" podStartSLOduration=2.443246648 podStartE2EDuration="2.991502417s" podCreationTimestamp="2026-01-07 03:55:26 +0000 UTC" firstStartedPulling="2026-01-07 03:55:27.969415863 +0000 UTC m=+1374.535110608" lastFinishedPulling="2026-01-07 03:55:28.517671622 +0000 UTC m=+1375.083366377" observedRunningTime="2026-01-07 03:55:28.986809732 +0000 UTC m=+1375.552504507" watchObservedRunningTime="2026-01-07 03:55:28.991502417 +0000 UTC m=+1375.557197172" Jan 07 03:55:31 crc kubenswrapper[4980]: I0107 03:55:31.729297 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:31 crc kubenswrapper[4980]: I0107 03:55:31.729518 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:31 crc kubenswrapper[4980]: I0107 03:55:31.993363 4980 generic.go:334] "Generic (PLEG): container finished" podID="3b42fafe-35e4-45a7-b3c9-95d8b9caa607" containerID="829d94e1e8ee19798916d3d55d47f444ad30abc7d95e03b565fb26e429cd9751" exitCode=0 Jan 07 03:55:31 crc kubenswrapper[4980]: I0107 03:55:31.993410 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" 
event={"ID":"3b42fafe-35e4-45a7-b3c9-95d8b9caa607","Type":"ContainerDied","Data":"829d94e1e8ee19798916d3d55d47f444ad30abc7d95e03b565fb26e429cd9751"} Jan 07 03:55:32 crc kubenswrapper[4980]: I0107 03:55:32.599746 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 07 03:55:32 crc kubenswrapper[4980]: I0107 03:55:32.805042 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2skrr" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="registry-server" probeResult="failure" output=< Jan 07 03:55:32 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 03:55:32 crc kubenswrapper[4980]: > Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.434805 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.616046 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kstj\" (UniqueName: \"kubernetes.io/projected/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-kube-api-access-9kstj\") pod \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.616113 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-ssh-key-openstack-edpm-ipam\") pod \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\" (UID: \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.616211 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-inventory\") pod \"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\" (UID: 
\"3b42fafe-35e4-45a7-b3c9-95d8b9caa607\") " Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.631782 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-kube-api-access-9kstj" (OuterVolumeSpecName: "kube-api-access-9kstj") pod "3b42fafe-35e4-45a7-b3c9-95d8b9caa607" (UID: "3b42fafe-35e4-45a7-b3c9-95d8b9caa607"). InnerVolumeSpecName "kube-api-access-9kstj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.648377 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-inventory" (OuterVolumeSpecName: "inventory") pod "3b42fafe-35e4-45a7-b3c9-95d8b9caa607" (UID: "3b42fafe-35e4-45a7-b3c9-95d8b9caa607"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.655056 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b42fafe-35e4-45a7-b3c9-95d8b9caa607" (UID: "3b42fafe-35e4-45a7-b3c9-95d8b9caa607"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.718230 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kstj\" (UniqueName: \"kubernetes.io/projected/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-kube-api-access-9kstj\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.718272 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.718286 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b42fafe-35e4-45a7-b3c9-95d8b9caa607-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.945123 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:33 crc kubenswrapper[4980]: I0107 03:55:33.945189 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.022466 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" event={"ID":"3b42fafe-35e4-45a7-b3c9-95d8b9caa607","Type":"ContainerDied","Data":"e5d2da0807691fca385dcd68233e2e4f705d2c8d751a0f38b37890447fe8c263"} Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.022545 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d2da0807691fca385dcd68233e2e4f705d2c8d751a0f38b37890447fe8c263" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.022691 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-phv2v" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.033980 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.100952 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.207379 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf"] Jan 07 03:55:34 crc kubenswrapper[4980]: E0107 03:55:34.207845 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b42fafe-35e4-45a7-b3c9-95d8b9caa607" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.207864 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b42fafe-35e4-45a7-b3c9-95d8b9caa607" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.208059 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b42fafe-35e4-45a7-b3c9-95d8b9caa607" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.208704 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.211109 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.211172 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.211252 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.213124 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.233271 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf"] Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.240430 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.240478 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: 
I0107 03:55:34.240709 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.240786 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6wtx\" (UniqueName: \"kubernetes.io/projected/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-kube-api-access-b6wtx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.342104 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wtx\" (UniqueName: \"kubernetes.io/projected/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-kube-api-access-b6wtx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.342205 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.342229 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.342320 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.349980 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.350767 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.350848 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.364078 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wtx\" (UniqueName: \"kubernetes.io/projected/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-kube-api-access-b6wtx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:34 crc kubenswrapper[4980]: I0107 03:55:34.533453 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:55:35 crc kubenswrapper[4980]: W0107 03:55:35.181076 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4dce042_2c6f_4a74_bbb3_84a79cfb02a1.slice/crio-649c15ffb7d43096cbdfcc59666680bf5a83ca9b5b48b547e7cdab0d01302054 WatchSource:0}: Error finding container 649c15ffb7d43096cbdfcc59666680bf5a83ca9b5b48b547e7cdab0d01302054: Status 404 returned error can't find the container with id 649c15ffb7d43096cbdfcc59666680bf5a83ca9b5b48b547e7cdab0d01302054 Jan 07 03:55:35 crc kubenswrapper[4980]: I0107 03:55:35.185879 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf"] Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.045467 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" event={"ID":"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1","Type":"ContainerStarted","Data":"9857d021c645158000bd58dc8e43e4d2038fe50905addf550083de8660408e66"} Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.046213 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" 
event={"ID":"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1","Type":"ContainerStarted","Data":"649c15ffb7d43096cbdfcc59666680bf5a83ca9b5b48b547e7cdab0d01302054"} Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.065584 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" podStartSLOduration=1.5387717539999999 podStartE2EDuration="2.065534698s" podCreationTimestamp="2026-01-07 03:55:34 +0000 UTC" firstStartedPulling="2026-01-07 03:55:35.18343005 +0000 UTC m=+1381.749124775" lastFinishedPulling="2026-01-07 03:55:35.710192984 +0000 UTC m=+1382.275887719" observedRunningTime="2026-01-07 03:55:36.062307429 +0000 UTC m=+1382.628002184" watchObservedRunningTime="2026-01-07 03:55:36.065534698 +0000 UTC m=+1382.631229463" Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.186598 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ff7l"] Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.186988 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6ff7l" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="registry-server" containerID="cri-o://c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c" gracePeriod=2 Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.783430 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.895390 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-catalog-content\") pod \"56d8822f-16a7-4b38-ad4c-459796d41c33\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.895460 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-utilities\") pod \"56d8822f-16a7-4b38-ad4c-459796d41c33\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.895563 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qddjk\" (UniqueName: \"kubernetes.io/projected/56d8822f-16a7-4b38-ad4c-459796d41c33-kube-api-access-qddjk\") pod \"56d8822f-16a7-4b38-ad4c-459796d41c33\" (UID: \"56d8822f-16a7-4b38-ad4c-459796d41c33\") " Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.897880 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-utilities" (OuterVolumeSpecName: "utilities") pod "56d8822f-16a7-4b38-ad4c-459796d41c33" (UID: "56d8822f-16a7-4b38-ad4c-459796d41c33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.911263 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d8822f-16a7-4b38-ad4c-459796d41c33-kube-api-access-qddjk" (OuterVolumeSpecName: "kube-api-access-qddjk") pod "56d8822f-16a7-4b38-ad4c-459796d41c33" (UID: "56d8822f-16a7-4b38-ad4c-459796d41c33"). InnerVolumeSpecName "kube-api-access-qddjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.921762 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56d8822f-16a7-4b38-ad4c-459796d41c33" (UID: "56d8822f-16a7-4b38-ad4c-459796d41c33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.998538 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.998597 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d8822f-16a7-4b38-ad4c-459796d41c33-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:36 crc kubenswrapper[4980]: I0107 03:55:36.998612 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qddjk\" (UniqueName: \"kubernetes.io/projected/56d8822f-16a7-4b38-ad4c-459796d41c33-kube-api-access-qddjk\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.065926 4980 generic.go:334] "Generic (PLEG): container finished" podID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerID="c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c" exitCode=0 Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.067321 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ff7l" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.068781 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ff7l" event={"ID":"56d8822f-16a7-4b38-ad4c-459796d41c33","Type":"ContainerDied","Data":"c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c"} Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.068861 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ff7l" event={"ID":"56d8822f-16a7-4b38-ad4c-459796d41c33","Type":"ContainerDied","Data":"c3d5ee27e4e71ea184caa7978250a782c9ec52d536f47ad1aa9f2ee6d4aaa42b"} Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.068898 4980 scope.go:117] "RemoveContainer" containerID="c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.111154 4980 scope.go:117] "RemoveContainer" containerID="c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.133446 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ff7l"] Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.147672 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ff7l"] Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.152511 4980 scope.go:117] "RemoveContainer" containerID="97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.205243 4980 scope.go:117] "RemoveContainer" containerID="c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c" Jan 07 03:55:37 crc kubenswrapper[4980]: E0107 03:55:37.206084 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c\": container with ID starting with c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c not found: ID does not exist" containerID="c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.206142 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c"} err="failed to get container status \"c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c\": rpc error: code = NotFound desc = could not find container \"c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c\": container with ID starting with c810a9d2677652b4c7b5ce4648caef6b0fbb055e3f08e0f0cd16d58e2591840c not found: ID does not exist" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.206183 4980 scope.go:117] "RemoveContainer" containerID="c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a" Jan 07 03:55:37 crc kubenswrapper[4980]: E0107 03:55:37.206648 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a\": container with ID starting with c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a not found: ID does not exist" containerID="c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.206683 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a"} err="failed to get container status \"c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a\": rpc error: code = NotFound desc = could not find container \"c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a\": container with ID 
starting with c83c14064ee81106e39b1bbae3e5eece390c075b92790d871b0c33d9c0203c7a not found: ID does not exist" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.206697 4980 scope.go:117] "RemoveContainer" containerID="97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837" Jan 07 03:55:37 crc kubenswrapper[4980]: E0107 03:55:37.206990 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837\": container with ID starting with 97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837 not found: ID does not exist" containerID="97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.207026 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837"} err="failed to get container status \"97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837\": rpc error: code = NotFound desc = could not find container \"97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837\": container with ID starting with 97a6ff01b2359ce2f621d5682c59c252dbd5ae2f84426b8ae04f5580c2d52837 not found: ID does not exist" Jan 07 03:55:37 crc kubenswrapper[4980]: I0107 03:55:37.756921 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" path="/var/lib/kubelet/pods/56d8822f-16a7-4b38-ad4c-459796d41c33/volumes" Jan 07 03:55:38 crc kubenswrapper[4980]: I0107 03:55:38.931078 4980 scope.go:117] "RemoveContainer" containerID="ecbe432cc3de5ef2366a992d17d5ea48d3dc8f104563979f64f98a2a80743205" Jan 07 03:55:41 crc kubenswrapper[4980]: I0107 03:55:41.819494 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:41 crc 
kubenswrapper[4980]: I0107 03:55:41.904168 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:42 crc kubenswrapper[4980]: I0107 03:55:42.066222 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2skrr"] Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.142603 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2skrr" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="registry-server" containerID="cri-o://91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65" gracePeriod=2 Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.663114 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.736822 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-utilities\") pod \"927fc97c-ef21-4121-af6d-ab1d596e3df5\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.737131 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-catalog-content\") pod \"927fc97c-ef21-4121-af6d-ab1d596e3df5\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.737265 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv6sm\" (UniqueName: \"kubernetes.io/projected/927fc97c-ef21-4121-af6d-ab1d596e3df5-kube-api-access-sv6sm\") pod \"927fc97c-ef21-4121-af6d-ab1d596e3df5\" (UID: \"927fc97c-ef21-4121-af6d-ab1d596e3df5\") " Jan 07 03:55:43 crc 
kubenswrapper[4980]: I0107 03:55:43.738017 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-utilities" (OuterVolumeSpecName: "utilities") pod "927fc97c-ef21-4121-af6d-ab1d596e3df5" (UID: "927fc97c-ef21-4121-af6d-ab1d596e3df5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.746990 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927fc97c-ef21-4121-af6d-ab1d596e3df5-kube-api-access-sv6sm" (OuterVolumeSpecName: "kube-api-access-sv6sm") pod "927fc97c-ef21-4121-af6d-ab1d596e3df5" (UID: "927fc97c-ef21-4121-af6d-ab1d596e3df5"). InnerVolumeSpecName "kube-api-access-sv6sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.839420 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.839456 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv6sm\" (UniqueName: \"kubernetes.io/projected/927fc97c-ef21-4121-af6d-ab1d596e3df5-kube-api-access-sv6sm\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.871562 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "927fc97c-ef21-4121-af6d-ab1d596e3df5" (UID: "927fc97c-ef21-4121-af6d-ab1d596e3df5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:55:43 crc kubenswrapper[4980]: I0107 03:55:43.941222 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927fc97c-ef21-4121-af6d-ab1d596e3df5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.160915 4980 generic.go:334] "Generic (PLEG): container finished" podID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerID="91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65" exitCode=0 Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.160964 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2skrr" event={"ID":"927fc97c-ef21-4121-af6d-ab1d596e3df5","Type":"ContainerDied","Data":"91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65"} Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.160983 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2skrr" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.161007 4980 scope.go:117] "RemoveContainer" containerID="91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.160994 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2skrr" event={"ID":"927fc97c-ef21-4121-af6d-ab1d596e3df5","Type":"ContainerDied","Data":"5ed7626e56a53e2ee23005d763e3dcf1b3f5dc522939768560034f4c88ffd1a1"} Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.206617 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2skrr"] Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.211028 4980 scope.go:117] "RemoveContainer" containerID="971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.216980 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2skrr"] Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.257655 4980 scope.go:117] "RemoveContainer" containerID="2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.323207 4980 scope.go:117] "RemoveContainer" containerID="91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65" Jan 07 03:55:44 crc kubenswrapper[4980]: E0107 03:55:44.323819 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65\": container with ID starting with 91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65 not found: ID does not exist" containerID="91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.323859 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65"} err="failed to get container status \"91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65\": rpc error: code = NotFound desc = could not find container \"91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65\": container with ID starting with 91ab7ec49d9a1c3fa242633b974037941d160e9f99a9d951db106cea6aa6eb65 not found: ID does not exist" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.323889 4980 scope.go:117] "RemoveContainer" containerID="971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5" Jan 07 03:55:44 crc kubenswrapper[4980]: E0107 03:55:44.324297 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5\": container with ID starting with 971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5 not found: ID does not exist" containerID="971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.324366 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5"} err="failed to get container status \"971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5\": rpc error: code = NotFound desc = could not find container \"971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5\": container with ID starting with 971de35a14aa59bb652c825ec7aa3b871749ddb4292293039fce4073791824d5 not found: ID does not exist" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.324408 4980 scope.go:117] "RemoveContainer" containerID="2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc" Jan 07 03:55:44 crc kubenswrapper[4980]: E0107 
03:55:44.324786 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc\": container with ID starting with 2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc not found: ID does not exist" containerID="2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc" Jan 07 03:55:44 crc kubenswrapper[4980]: I0107 03:55:44.324822 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc"} err="failed to get container status \"2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc\": rpc error: code = NotFound desc = could not find container \"2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc\": container with ID starting with 2b983da1dc974d5db8119c62e9a27922eb2e82e6b294351ce8231e52a1e54bbc not found: ID does not exist" Jan 07 03:55:45 crc kubenswrapper[4980]: I0107 03:55:45.746470 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" path="/var/lib/kubelet/pods/927fc97c-ef21-4121-af6d-ab1d596e3df5/volumes" Jan 07 03:56:36 crc kubenswrapper[4980]: I0107 03:56:36.543690 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:56:36 crc kubenswrapper[4980]: I0107 03:56:36.544844 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 07 03:56:39 crc kubenswrapper[4980]: I0107 03:56:39.046305 4980 scope.go:117] "RemoveContainer" containerID="a2369d40f6bf481bb3301fab5e969e1e3bfbe5c0c42747c89f8d42fc32114e25" Jan 07 03:56:39 crc kubenswrapper[4980]: I0107 03:56:39.128799 4980 scope.go:117] "RemoveContainer" containerID="a62ea589814670e969aec9e78c97c580b8d1538f3796411cbe160a4044c9258d" Jan 07 03:56:39 crc kubenswrapper[4980]: I0107 03:56:39.182362 4980 scope.go:117] "RemoveContainer" containerID="5e36cba9c6cdfc2b8d1c15aa812ced9691350acef7022fe9c126099890c5a3a4" Jan 07 03:56:39 crc kubenswrapper[4980]: I0107 03:56:39.218644 4980 scope.go:117] "RemoveContainer" containerID="e4e730d9f176a8a9e306c231483146f60cc1a8d163953cdc4fb8c3733f3cbde5" Jan 07 03:57:06 crc kubenswrapper[4980]: I0107 03:57:06.543035 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:57:06 crc kubenswrapper[4980]: I0107 03:57:06.543711 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:57:36 crc kubenswrapper[4980]: I0107 03:57:36.543099 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 03:57:36 crc kubenswrapper[4980]: I0107 03:57:36.543610 4980 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 03:57:36 crc kubenswrapper[4980]: I0107 03:57:36.543659 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 03:57:36 crc kubenswrapper[4980]: I0107 03:57:36.544402 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 03:57:36 crc kubenswrapper[4980]: I0107 03:57:36.544458 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" gracePeriod=600 Jan 07 03:57:36 crc kubenswrapper[4980]: E0107 03:57:36.702540 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:57:37 crc kubenswrapper[4980]: I0107 03:57:37.520927 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" 
containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" exitCode=0 Jan 07 03:57:37 crc kubenswrapper[4980]: I0107 03:57:37.521151 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862"} Jan 07 03:57:37 crc kubenswrapper[4980]: I0107 03:57:37.521349 4980 scope.go:117] "RemoveContainer" containerID="9509c26a45d342c7045d51ecdcc5dc207e383364d36ad4c45d95a6527388485b" Jan 07 03:57:37 crc kubenswrapper[4980]: I0107 03:57:37.521886 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:57:37 crc kubenswrapper[4980]: E0107 03:57:37.522247 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:57:50 crc kubenswrapper[4980]: I0107 03:57:50.736686 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:57:50 crc kubenswrapper[4980]: E0107 03:57:50.737757 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 
03:57:58.458523 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-brq65"] Jan 07 03:57:58 crc kubenswrapper[4980]: E0107 03:57:58.469350 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="extract-utilities" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469372 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="extract-utilities" Jan 07 03:57:58 crc kubenswrapper[4980]: E0107 03:57:58.469393 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="extract-utilities" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469399 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="extract-utilities" Jan 07 03:57:58 crc kubenswrapper[4980]: E0107 03:57:58.469414 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="registry-server" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469420 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="registry-server" Jan 07 03:57:58 crc kubenswrapper[4980]: E0107 03:57:58.469431 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="extract-content" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469436 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="extract-content" Jan 07 03:57:58 crc kubenswrapper[4980]: E0107 03:57:58.469448 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="registry-server" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469454 4980 
state_mem.go:107] "Deleted CPUSet assignment" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="registry-server" Jan 07 03:57:58 crc kubenswrapper[4980]: E0107 03:57:58.469469 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="extract-content" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469474 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="extract-content" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469701 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="927fc97c-ef21-4121-af6d-ab1d596e3df5" containerName="registry-server" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.469721 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d8822f-16a7-4b38-ad4c-459796d41c33" containerName="registry-server" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.470979 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brq65"] Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.471085 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.549339 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdk46\" (UniqueName: \"kubernetes.io/projected/97db042a-c80c-464e-a960-6bfee98bcf94-kube-api-access-pdk46\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.549631 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-catalog-content\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.549878 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-utilities\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.651188 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-catalog-content\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.651522 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-utilities\") pod 
\"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.651606 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdk46\" (UniqueName: \"kubernetes.io/projected/97db042a-c80c-464e-a960-6bfee98bcf94-kube-api-access-pdk46\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.651827 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-catalog-content\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.651852 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-utilities\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.676080 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdk46\" (UniqueName: \"kubernetes.io/projected/97db042a-c80c-464e-a960-6bfee98bcf94-kube-api-access-pdk46\") pod \"certified-operators-brq65\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:58 crc kubenswrapper[4980]: I0107 03:57:58.795834 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:57:59 crc kubenswrapper[4980]: I0107 03:57:59.303217 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brq65"] Jan 07 03:57:59 crc kubenswrapper[4980]: I0107 03:57:59.795130 4980 generic.go:334] "Generic (PLEG): container finished" podID="97db042a-c80c-464e-a960-6bfee98bcf94" containerID="bb9c9822d86389c3deaff2cd46b0e55c044f81a06715d5f6acb305d376c2e777" exitCode=0 Jan 07 03:57:59 crc kubenswrapper[4980]: I0107 03:57:59.795249 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brq65" event={"ID":"97db042a-c80c-464e-a960-6bfee98bcf94","Type":"ContainerDied","Data":"bb9c9822d86389c3deaff2cd46b0e55c044f81a06715d5f6acb305d376c2e777"} Jan 07 03:57:59 crc kubenswrapper[4980]: I0107 03:57:59.795459 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brq65" event={"ID":"97db042a-c80c-464e-a960-6bfee98bcf94","Type":"ContainerStarted","Data":"d6aaf2638f8c9eae13f3419e9f2544a59f2556995995ee5974e6f6f9e1ff38a2"} Jan 07 03:58:00 crc kubenswrapper[4980]: I0107 03:58:00.814162 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brq65" event={"ID":"97db042a-c80c-464e-a960-6bfee98bcf94","Type":"ContainerStarted","Data":"9a9f845408ac9872a485764aa675d97aba41249d626b4a7c399a84b274a2ea98"} Jan 07 03:58:01 crc kubenswrapper[4980]: I0107 03:58:01.735972 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:58:01 crc kubenswrapper[4980]: E0107 03:58:01.736521 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:58:01 crc kubenswrapper[4980]: I0107 03:58:01.829941 4980 generic.go:334] "Generic (PLEG): container finished" podID="97db042a-c80c-464e-a960-6bfee98bcf94" containerID="9a9f845408ac9872a485764aa675d97aba41249d626b4a7c399a84b274a2ea98" exitCode=0 Jan 07 03:58:01 crc kubenswrapper[4980]: I0107 03:58:01.830012 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brq65" event={"ID":"97db042a-c80c-464e-a960-6bfee98bcf94","Type":"ContainerDied","Data":"9a9f845408ac9872a485764aa675d97aba41249d626b4a7c399a84b274a2ea98"} Jan 07 03:58:02 crc kubenswrapper[4980]: I0107 03:58:02.855248 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brq65" event={"ID":"97db042a-c80c-464e-a960-6bfee98bcf94","Type":"ContainerStarted","Data":"40ed810b0c08a8d30dfc936515dee28106a95e1b5fd18c5f643fe2d97fb6c0c9"} Jan 07 03:58:02 crc kubenswrapper[4980]: I0107 03:58:02.896189 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-brq65" podStartSLOduration=2.373620089 podStartE2EDuration="4.896170316s" podCreationTimestamp="2026-01-07 03:57:58 +0000 UTC" firstStartedPulling="2026-01-07 03:57:59.796860609 +0000 UTC m=+1526.362555344" lastFinishedPulling="2026-01-07 03:58:02.319410826 +0000 UTC m=+1528.885105571" observedRunningTime="2026-01-07 03:58:02.879683266 +0000 UTC m=+1529.445378031" watchObservedRunningTime="2026-01-07 03:58:02.896170316 +0000 UTC m=+1529.461865061" Jan 07 03:58:08 crc kubenswrapper[4980]: I0107 03:58:08.796774 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:58:08 crc kubenswrapper[4980]: I0107 
03:58:08.797435 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:58:08 crc kubenswrapper[4980]: I0107 03:58:08.873272 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:58:09 crc kubenswrapper[4980]: I0107 03:58:09.007680 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:58:09 crc kubenswrapper[4980]: I0107 03:58:09.122891 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-brq65"] Jan 07 03:58:10 crc kubenswrapper[4980]: I0107 03:58:10.954627 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-brq65" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="registry-server" containerID="cri-o://40ed810b0c08a8d30dfc936515dee28106a95e1b5fd18c5f643fe2d97fb6c0c9" gracePeriod=2 Jan 07 03:58:11 crc kubenswrapper[4980]: I0107 03:58:11.980689 4980 generic.go:334] "Generic (PLEG): container finished" podID="97db042a-c80c-464e-a960-6bfee98bcf94" containerID="40ed810b0c08a8d30dfc936515dee28106a95e1b5fd18c5f643fe2d97fb6c0c9" exitCode=0 Jan 07 03:58:11 crc kubenswrapper[4980]: I0107 03:58:11.980776 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brq65" event={"ID":"97db042a-c80c-464e-a960-6bfee98bcf94","Type":"ContainerDied","Data":"40ed810b0c08a8d30dfc936515dee28106a95e1b5fd18c5f643fe2d97fb6c0c9"} Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.569170 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.689668 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdk46\" (UniqueName: \"kubernetes.io/projected/97db042a-c80c-464e-a960-6bfee98bcf94-kube-api-access-pdk46\") pod \"97db042a-c80c-464e-a960-6bfee98bcf94\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.689857 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-catalog-content\") pod \"97db042a-c80c-464e-a960-6bfee98bcf94\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.689974 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-utilities\") pod \"97db042a-c80c-464e-a960-6bfee98bcf94\" (UID: \"97db042a-c80c-464e-a960-6bfee98bcf94\") " Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.691255 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-utilities" (OuterVolumeSpecName: "utilities") pod "97db042a-c80c-464e-a960-6bfee98bcf94" (UID: "97db042a-c80c-464e-a960-6bfee98bcf94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.706671 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97db042a-c80c-464e-a960-6bfee98bcf94-kube-api-access-pdk46" (OuterVolumeSpecName: "kube-api-access-pdk46") pod "97db042a-c80c-464e-a960-6bfee98bcf94" (UID: "97db042a-c80c-464e-a960-6bfee98bcf94"). InnerVolumeSpecName "kube-api-access-pdk46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.762487 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97db042a-c80c-464e-a960-6bfee98bcf94" (UID: "97db042a-c80c-464e-a960-6bfee98bcf94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.792093 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.792145 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdk46\" (UniqueName: \"kubernetes.io/projected/97db042a-c80c-464e-a960-6bfee98bcf94-kube-api-access-pdk46\") on node \"crc\" DevicePath \"\"" Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.792170 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97db042a-c80c-464e-a960-6bfee98bcf94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.999058 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brq65" event={"ID":"97db042a-c80c-464e-a960-6bfee98bcf94","Type":"ContainerDied","Data":"d6aaf2638f8c9eae13f3419e9f2544a59f2556995995ee5974e6f6f9e1ff38a2"} Jan 07 03:58:12 crc kubenswrapper[4980]: I0107 03:58:12.999135 4980 scope.go:117] "RemoveContainer" containerID="40ed810b0c08a8d30dfc936515dee28106a95e1b5fd18c5f643fe2d97fb6c0c9" Jan 07 03:58:13 crc kubenswrapper[4980]: I0107 03:58:12.999204 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brq65" Jan 07 03:58:13 crc kubenswrapper[4980]: I0107 03:58:13.027331 4980 scope.go:117] "RemoveContainer" containerID="9a9f845408ac9872a485764aa675d97aba41249d626b4a7c399a84b274a2ea98" Jan 07 03:58:13 crc kubenswrapper[4980]: I0107 03:58:13.078425 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-brq65"] Jan 07 03:58:13 crc kubenswrapper[4980]: I0107 03:58:13.080015 4980 scope.go:117] "RemoveContainer" containerID="bb9c9822d86389c3deaff2cd46b0e55c044f81a06715d5f6acb305d376c2e777" Jan 07 03:58:13 crc kubenswrapper[4980]: I0107 03:58:13.088698 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-brq65"] Jan 07 03:58:13 crc kubenswrapper[4980]: I0107 03:58:13.747923 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:58:13 crc kubenswrapper[4980]: E0107 03:58:13.748512 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:58:13 crc kubenswrapper[4980]: I0107 03:58:13.751591 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" path="/var/lib/kubelet/pods/97db042a-c80c-464e-a960-6bfee98bcf94/volumes" Jan 07 03:58:25 crc kubenswrapper[4980]: I0107 03:58:25.736394 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:58:25 crc kubenswrapper[4980]: E0107 03:58:25.737657 4980 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:58:39 crc kubenswrapper[4980]: I0107 03:58:39.374044 4980 scope.go:117] "RemoveContainer" containerID="46bf3ccf201c786e81ef65b74b15f50eabd1becb918424bd98329974a087f9ce" Jan 07 03:58:39 crc kubenswrapper[4980]: I0107 03:58:39.403163 4980 scope.go:117] "RemoveContainer" containerID="d59c2e0f541ef6199bf8b88f34fc752de1c5eb9c041461be70ff1be5cb276666" Jan 07 03:58:39 crc kubenswrapper[4980]: I0107 03:58:39.444654 4980 scope.go:117] "RemoveContainer" containerID="e4629353bf5f5fa1562af2db98c989b1e848336d906925ffcd0e6a1d5b5ec7ae" Jan 07 03:58:39 crc kubenswrapper[4980]: I0107 03:58:39.483486 4980 scope.go:117] "RemoveContainer" containerID="3ed8266c2272f3bb7b2810804caf80d22867fbe9efa4d9d0beaedc1151883b5d" Jan 07 03:58:40 crc kubenswrapper[4980]: I0107 03:58:40.736002 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:58:40 crc kubenswrapper[4980]: E0107 03:58:40.736277 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:58:48 crc kubenswrapper[4980]: I0107 03:58:48.503386 4980 generic.go:334] "Generic (PLEG): container finished" podID="b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" 
containerID="9857d021c645158000bd58dc8e43e4d2038fe50905addf550083de8660408e66" exitCode=0 Jan 07 03:58:48 crc kubenswrapper[4980]: I0107 03:58:48.513826 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" event={"ID":"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1","Type":"ContainerDied","Data":"9857d021c645158000bd58dc8e43e4d2038fe50905addf550083de8660408e66"} Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.109425 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.154942 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-inventory\") pod \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.155162 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-ssh-key-openstack-edpm-ipam\") pod \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.155408 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-bootstrap-combined-ca-bundle\") pod \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.155542 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6wtx\" (UniqueName: 
\"kubernetes.io/projected/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-kube-api-access-b6wtx\") pod \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\" (UID: \"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1\") " Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.164137 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-kube-api-access-b6wtx" (OuterVolumeSpecName: "kube-api-access-b6wtx") pod "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" (UID: "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1"). InnerVolumeSpecName "kube-api-access-b6wtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.164526 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" (UID: "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.188785 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" (UID: "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.211693 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-inventory" (OuterVolumeSpecName: "inventory") pod "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" (UID: "b4dce042-2c6f-4a74-bbb3-84a79cfb02a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.258392 4980 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.258447 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6wtx\" (UniqueName: \"kubernetes.io/projected/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-kube-api-access-b6wtx\") on node \"crc\" DevicePath \"\"" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.258469 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.258487 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4dce042-2c6f-4a74-bbb3-84a79cfb02a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.530684 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" event={"ID":"b4dce042-2c6f-4a74-bbb3-84a79cfb02a1","Type":"ContainerDied","Data":"649c15ffb7d43096cbdfcc59666680bf5a83ca9b5b48b547e7cdab0d01302054"} Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.530733 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="649c15ffb7d43096cbdfcc59666680bf5a83ca9b5b48b547e7cdab0d01302054" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.530742 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.651529 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp"] Jan 07 03:58:50 crc kubenswrapper[4980]: E0107 03:58:50.651965 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="extract-content" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.651997 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="extract-content" Jan 07 03:58:50 crc kubenswrapper[4980]: E0107 03:58:50.652034 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.652046 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 07 03:58:50 crc kubenswrapper[4980]: E0107 03:58:50.652073 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="registry-server" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.652081 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="registry-server" Jan 07 03:58:50 crc kubenswrapper[4980]: E0107 03:58:50.652100 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="extract-utilities" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.652108 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="extract-utilities" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.652337 
4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="97db042a-c80c-464e-a960-6bfee98bcf94" containerName="registry-server" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.652358 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4dce042-2c6f-4a74-bbb3-84a79cfb02a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.653116 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.656897 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.657156 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.657876 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.661873 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.681943 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp"] Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.770424 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 
crc kubenswrapper[4980]: I0107 03:58:50.770530 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.771318 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvjl\" (UniqueName: \"kubernetes.io/projected/111ee99f-4f5d-4647-9ee9-33addfaad13e-kube-api-access-vgvjl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.872874 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.872976 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.873057 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvjl\" (UniqueName: 
\"kubernetes.io/projected/111ee99f-4f5d-4647-9ee9-33addfaad13e-kube-api-access-vgvjl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.879613 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.881258 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:50 crc kubenswrapper[4980]: I0107 03:58:50.903144 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvjl\" (UniqueName: \"kubernetes.io/projected/111ee99f-4f5d-4647-9ee9-33addfaad13e-kube-api-access-vgvjl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:51 crc kubenswrapper[4980]: I0107 03:58:51.007538 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 03:58:51 crc kubenswrapper[4980]: I0107 03:58:51.608927 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp"] Jan 07 03:58:51 crc kubenswrapper[4980]: I0107 03:58:51.610931 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 03:58:52 crc kubenswrapper[4980]: I0107 03:58:52.558969 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" event={"ID":"111ee99f-4f5d-4647-9ee9-33addfaad13e","Type":"ContainerStarted","Data":"e13565ee1c4b9ead8345ca935abf00df6b0e625068dfd4eaf31b340ad42cab34"} Jan 07 03:58:52 crc kubenswrapper[4980]: I0107 03:58:52.735603 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:58:52 crc kubenswrapper[4980]: E0107 03:58:52.736025 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:58:53 crc kubenswrapper[4980]: I0107 03:58:53.579744 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" event={"ID":"111ee99f-4f5d-4647-9ee9-33addfaad13e","Type":"ContainerStarted","Data":"f9f466937990a30065c8fd034366fd10a3d6d6ba8910105329b28625a428b389"} Jan 07 03:58:53 crc kubenswrapper[4980]: I0107 03:58:53.617619 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" podStartSLOduration=2.945266078 podStartE2EDuration="3.617593971s" podCreationTimestamp="2026-01-07 03:58:50 +0000 UTC" firstStartedPulling="2026-01-07 03:58:51.610644754 +0000 UTC m=+1578.176339499" lastFinishedPulling="2026-01-07 03:58:52.282972657 +0000 UTC m=+1578.848667392" observedRunningTime="2026-01-07 03:58:53.616975741 +0000 UTC m=+1580.182670516" watchObservedRunningTime="2026-01-07 03:58:53.617593971 +0000 UTC m=+1580.183288746" Jan 07 03:58:55 crc kubenswrapper[4980]: I0107 03:58:55.933352 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xvvc9"] Jan 07 03:58:55 crc kubenswrapper[4980]: I0107 03:58:55.936665 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:55 crc kubenswrapper[4980]: I0107 03:58:55.959241 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvvc9"] Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.093164 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdxj2\" (UniqueName: \"kubernetes.io/projected/a814646d-1a2a-47fd-acc1-58f00f53c1bd-kube-api-access-sdxj2\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.093392 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-catalog-content\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.093588 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-utilities\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.195535 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdxj2\" (UniqueName: \"kubernetes.io/projected/a814646d-1a2a-47fd-acc1-58f00f53c1bd-kube-api-access-sdxj2\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.195743 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-catalog-content\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.195878 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-utilities\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.196504 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-utilities\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.196594 4980 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-catalog-content\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.232008 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdxj2\" (UniqueName: \"kubernetes.io/projected/a814646d-1a2a-47fd-acc1-58f00f53c1bd-kube-api-access-sdxj2\") pod \"community-operators-xvvc9\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.262342 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:58:56 crc kubenswrapper[4980]: W0107 03:58:56.803517 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda814646d_1a2a_47fd_acc1_58f00f53c1bd.slice/crio-76097f2ad163317480d0aa3218a749b9d661fe5725c6ff11df25cceb4842c194 WatchSource:0}: Error finding container 76097f2ad163317480d0aa3218a749b9d661fe5725c6ff11df25cceb4842c194: Status 404 returned error can't find the container with id 76097f2ad163317480d0aa3218a749b9d661fe5725c6ff11df25cceb4842c194 Jan 07 03:58:56 crc kubenswrapper[4980]: I0107 03:58:56.804847 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvvc9"] Jan 07 03:58:57 crc kubenswrapper[4980]: I0107 03:58:57.619873 4980 generic.go:334] "Generic (PLEG): container finished" podID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerID="78888a9b33135451ecb880df1945c6ce2c01e08612216c88fe609dbfe086ed7d" exitCode=0 Jan 07 03:58:57 crc kubenswrapper[4980]: I0107 03:58:57.620046 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-xvvc9" event={"ID":"a814646d-1a2a-47fd-acc1-58f00f53c1bd","Type":"ContainerDied","Data":"78888a9b33135451ecb880df1945c6ce2c01e08612216c88fe609dbfe086ed7d"} Jan 07 03:58:57 crc kubenswrapper[4980]: I0107 03:58:57.620170 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvc9" event={"ID":"a814646d-1a2a-47fd-acc1-58f00f53c1bd","Type":"ContainerStarted","Data":"76097f2ad163317480d0aa3218a749b9d661fe5725c6ff11df25cceb4842c194"} Jan 07 03:58:59 crc kubenswrapper[4980]: I0107 03:58:59.643578 4980 generic.go:334] "Generic (PLEG): container finished" podID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerID="6913b818ee0010a25db981d606398c10c0bc8f4b2e904974b93969d396b878be" exitCode=0 Jan 07 03:58:59 crc kubenswrapper[4980]: I0107 03:58:59.643722 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvc9" event={"ID":"a814646d-1a2a-47fd-acc1-58f00f53c1bd","Type":"ContainerDied","Data":"6913b818ee0010a25db981d606398c10c0bc8f4b2e904974b93969d396b878be"} Jan 07 03:59:00 crc kubenswrapper[4980]: I0107 03:59:00.657875 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvc9" event={"ID":"a814646d-1a2a-47fd-acc1-58f00f53c1bd","Type":"ContainerStarted","Data":"14489cacf04455dc7330a7064cdb41f32bf83316965fd484f756cc0af8a165fa"} Jan 07 03:59:00 crc kubenswrapper[4980]: I0107 03:59:00.682968 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xvvc9" podStartSLOduration=3.170068473 podStartE2EDuration="5.682951161s" podCreationTimestamp="2026-01-07 03:58:55 +0000 UTC" firstStartedPulling="2026-01-07 03:58:57.621952769 +0000 UTC m=+1584.187647514" lastFinishedPulling="2026-01-07 03:59:00.134835457 +0000 UTC m=+1586.700530202" observedRunningTime="2026-01-07 03:59:00.680343541 +0000 UTC m=+1587.246038276" 
watchObservedRunningTime="2026-01-07 03:59:00.682951161 +0000 UTC m=+1587.248645896" Jan 07 03:59:03 crc kubenswrapper[4980]: I0107 03:59:03.745445 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:59:03 crc kubenswrapper[4980]: E0107 03:59:03.746202 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:59:06 crc kubenswrapper[4980]: I0107 03:59:06.263120 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:59:06 crc kubenswrapper[4980]: I0107 03:59:06.263529 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:59:06 crc kubenswrapper[4980]: I0107 03:59:06.327372 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:59:06 crc kubenswrapper[4980]: I0107 03:59:06.790888 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:59:06 crc kubenswrapper[4980]: I0107 03:59:06.850981 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvvc9"] Jan 07 03:59:08 crc kubenswrapper[4980]: I0107 03:59:08.742901 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xvvc9" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="registry-server" 
containerID="cri-o://14489cacf04455dc7330a7064cdb41f32bf83316965fd484f756cc0af8a165fa" gracePeriod=2 Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 03:59:09.761252 4980 generic.go:334] "Generic (PLEG): container finished" podID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerID="14489cacf04455dc7330a7064cdb41f32bf83316965fd484f756cc0af8a165fa" exitCode=0 Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 03:59:09.761353 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvc9" event={"ID":"a814646d-1a2a-47fd-acc1-58f00f53c1bd","Type":"ContainerDied","Data":"14489cacf04455dc7330a7064cdb41f32bf83316965fd484f756cc0af8a165fa"} Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 03:59:09.832031 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 03:59:09.983406 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-utilities\") pod \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 03:59:09.983885 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-catalog-content\") pod \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 03:59:09.983956 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdxj2\" (UniqueName: \"kubernetes.io/projected/a814646d-1a2a-47fd-acc1-58f00f53c1bd-kube-api-access-sdxj2\") pod \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\" (UID: \"a814646d-1a2a-47fd-acc1-58f00f53c1bd\") " Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 
03:59:09.988062 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-utilities" (OuterVolumeSpecName: "utilities") pod "a814646d-1a2a-47fd-acc1-58f00f53c1bd" (UID: "a814646d-1a2a-47fd-acc1-58f00f53c1bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:59:09 crc kubenswrapper[4980]: I0107 03:59:09.993896 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a814646d-1a2a-47fd-acc1-58f00f53c1bd-kube-api-access-sdxj2" (OuterVolumeSpecName: "kube-api-access-sdxj2") pod "a814646d-1a2a-47fd-acc1-58f00f53c1bd" (UID: "a814646d-1a2a-47fd-acc1-58f00f53c1bd"). InnerVolumeSpecName "kube-api-access-sdxj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.070192 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a814646d-1a2a-47fd-acc1-58f00f53c1bd" (UID: "a814646d-1a2a-47fd-acc1-58f00f53c1bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.086477 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.086514 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdxj2\" (UniqueName: \"kubernetes.io/projected/a814646d-1a2a-47fd-acc1-58f00f53c1bd-kube-api-access-sdxj2\") on node \"crc\" DevicePath \"\"" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.086530 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a814646d-1a2a-47fd-acc1-58f00f53c1bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.777092 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvc9" event={"ID":"a814646d-1a2a-47fd-acc1-58f00f53c1bd","Type":"ContainerDied","Data":"76097f2ad163317480d0aa3218a749b9d661fe5725c6ff11df25cceb4842c194"} Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.777472 4980 scope.go:117] "RemoveContainer" containerID="14489cacf04455dc7330a7064cdb41f32bf83316965fd484f756cc0af8a165fa" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.777221 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvvc9" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.812548 4980 scope.go:117] "RemoveContainer" containerID="6913b818ee0010a25db981d606398c10c0bc8f4b2e904974b93969d396b878be" Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.838842 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvvc9"] Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.852231 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xvvc9"] Jan 07 03:59:10 crc kubenswrapper[4980]: I0107 03:59:10.862021 4980 scope.go:117] "RemoveContainer" containerID="78888a9b33135451ecb880df1945c6ce2c01e08612216c88fe609dbfe086ed7d" Jan 07 03:59:11 crc kubenswrapper[4980]: I0107 03:59:11.755029 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" path="/var/lib/kubelet/pods/a814646d-1a2a-47fd-acc1-58f00f53c1bd/volumes" Jan 07 03:59:18 crc kubenswrapper[4980]: I0107 03:59:18.735624 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:59:18 crc kubenswrapper[4980]: E0107 03:59:18.736678 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:59:30 crc kubenswrapper[4980]: I0107 03:59:30.735734 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:59:30 crc kubenswrapper[4980]: E0107 03:59:30.736628 4980 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:59:39 crc kubenswrapper[4980]: I0107 03:59:39.581852 4980 scope.go:117] "RemoveContainer" containerID="7ab0573e2f0e4c9045afedb78ad1c2e8f951b0b643c94f4e94c2299e6ed059f7" Jan 07 03:59:39 crc kubenswrapper[4980]: I0107 03:59:39.616037 4980 scope.go:117] "RemoveContainer" containerID="1194a3e34a6352a689f25e7333b32840ceca1ba433c13e271ed95f668a4b6392" Jan 07 03:59:39 crc kubenswrapper[4980]: I0107 03:59:39.636067 4980 scope.go:117] "RemoveContainer" containerID="32c961cfd1a424af31ba752f24576c915f6b405b2eb0fc4db207d845031164c2" Jan 07 03:59:39 crc kubenswrapper[4980]: I0107 03:59:39.652736 4980 scope.go:117] "RemoveContainer" containerID="3fd56efc40b629351ca9e5f91e0bbcdc917b6a0b994a94e4bb95f6376c342453" Jan 07 03:59:42 crc kubenswrapper[4980]: I0107 03:59:42.740264 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:59:42 crc kubenswrapper[4980]: E0107 03:59:42.742904 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:59:54 crc kubenswrapper[4980]: I0107 03:59:54.057645 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-58ef-account-create-update-hg8wd"] Jan 07 03:59:54 crc kubenswrapper[4980]: I0107 
03:59:54.065321 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-58ef-account-create-update-hg8wd"] Jan 07 03:59:54 crc kubenswrapper[4980]: I0107 03:59:54.736127 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 03:59:54 crc kubenswrapper[4980]: E0107 03:59:54.736444 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.048124 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-41b5-account-create-update-wk7dn"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.058505 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-41b5-account-create-update-wk7dn"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.070694 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e29c-account-create-update-wmsz7"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.079181 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wf7kv"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.086457 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c9d84"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.093443 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pcrf5"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.101690 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wf7kv"] Jan 07 03:59:55 crc 
kubenswrapper[4980]: I0107 03:59:55.108638 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e29c-account-create-update-wmsz7"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.116137 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c9d84"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.124372 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pcrf5"] Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.755172 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6bb0d3-37af-4da6-a806-b276b642fabe" path="/var/lib/kubelet/pods/1d6bb0d3-37af-4da6-a806-b276b642fabe/volumes" Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.756688 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743899ef-fe87-4dfb-9286-d9a68ade43c6" path="/var/lib/kubelet/pods/743899ef-fe87-4dfb-9286-d9a68ade43c6/volumes" Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.758137 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7608666e-e3a7-4b17-ac4e-9fcacb09ccca" path="/var/lib/kubelet/pods/7608666e-e3a7-4b17-ac4e-9fcacb09ccca/volumes" Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.759221 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8030272c-dd7c-4eb4-822f-29fdff143d62" path="/var/lib/kubelet/pods/8030272c-dd7c-4eb4-822f-29fdff143d62/volumes" Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.761347 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb24e5f0-b43d-4512-ab23-b340b8b97c1d" path="/var/lib/kubelet/pods/cb24e5f0-b43d-4512-ab23-b340b8b97c1d/volumes" Jan 07 03:59:55 crc kubenswrapper[4980]: I0107 03:59:55.762491 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7d0db6-cb2a-46cb-957b-2ec9db253878" path="/var/lib/kubelet/pods/ce7d0db6-cb2a-46cb-957b-2ec9db253878/volumes" Jan 07 03:59:59 crc 
kubenswrapper[4980]: I0107 03:59:59.030241 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pfwbk"] Jan 07 03:59:59 crc kubenswrapper[4980]: I0107 03:59:59.037257 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pfwbk"] Jan 07 03:59:59 crc kubenswrapper[4980]: I0107 03:59:59.757189 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f38553ab-2b7d-486b-9290-161ef0dd23b3" path="/var/lib/kubelet/pods/f38553ab-2b7d-486b-9290-161ef0dd23b3/volumes" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.176163 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598"] Jan 07 04:00:00 crc kubenswrapper[4980]: E0107 04:00:00.177912 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="extract-utilities" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.177963 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="extract-utilities" Jan 07 04:00:00 crc kubenswrapper[4980]: E0107 04:00:00.178012 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="registry-server" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.178030 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="registry-server" Jan 07 04:00:00 crc kubenswrapper[4980]: E0107 04:00:00.178144 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="extract-content" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.178159 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="extract-content" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 
04:00:00.182137 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="a814646d-1a2a-47fd-acc1-58f00f53c1bd" containerName="registry-server" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.184906 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.197068 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.197322 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.231710 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598"] Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.323319 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee618e14-44ba-4a43-9ce3-b933ddc708fa-secret-volume\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.323775 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cww6t\" (UniqueName: \"kubernetes.io/projected/ee618e14-44ba-4a43-9ce3-b933ddc708fa-kube-api-access-cww6t\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.323826 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee618e14-44ba-4a43-9ce3-b933ddc708fa-config-volume\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.426267 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee618e14-44ba-4a43-9ce3-b933ddc708fa-secret-volume\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.426424 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cww6t\" (UniqueName: \"kubernetes.io/projected/ee618e14-44ba-4a43-9ce3-b933ddc708fa-kube-api-access-cww6t\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.426477 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee618e14-44ba-4a43-9ce3-b933ddc708fa-config-volume\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.428166 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee618e14-44ba-4a43-9ce3-b933ddc708fa-config-volume\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.434048 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee618e14-44ba-4a43-9ce3-b933ddc708fa-secret-volume\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.450044 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cww6t\" (UniqueName: \"kubernetes.io/projected/ee618e14-44ba-4a43-9ce3-b933ddc708fa-kube-api-access-cww6t\") pod \"collect-profiles-29462640-x2598\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:00 crc kubenswrapper[4980]: I0107 04:00:00.533002 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:01 crc kubenswrapper[4980]: I0107 04:00:01.027135 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598"] Jan 07 04:00:01 crc kubenswrapper[4980]: I0107 04:00:01.389530 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" event={"ID":"ee618e14-44ba-4a43-9ce3-b933ddc708fa","Type":"ContainerStarted","Data":"8a217cb936d622019ce70ff748e7e3e92e2001f856611ef504e4e4e030cb2fde"} Jan 07 04:00:01 crc kubenswrapper[4980]: I0107 04:00:01.389988 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" event={"ID":"ee618e14-44ba-4a43-9ce3-b933ddc708fa","Type":"ContainerStarted","Data":"4c2eed9f06b15e1c50edf7c8563c795dcc0d70663d38df2c4e6993994f20113b"} Jan 07 04:00:01 crc kubenswrapper[4980]: I0107 04:00:01.421681 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" podStartSLOduration=1.421658109 podStartE2EDuration="1.421658109s" podCreationTimestamp="2026-01-07 04:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 04:00:01.410980259 +0000 UTC m=+1647.976675024" watchObservedRunningTime="2026-01-07 04:00:01.421658109 +0000 UTC m=+1647.987352854" Jan 07 04:00:02 crc kubenswrapper[4980]: I0107 04:00:02.403393 4980 generic.go:334] "Generic (PLEG): container finished" podID="ee618e14-44ba-4a43-9ce3-b933ddc708fa" containerID="8a217cb936d622019ce70ff748e7e3e92e2001f856611ef504e4e4e030cb2fde" exitCode=0 Jan 07 04:00:02 crc kubenswrapper[4980]: I0107 04:00:02.403526 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" event={"ID":"ee618e14-44ba-4a43-9ce3-b933ddc708fa","Type":"ContainerDied","Data":"8a217cb936d622019ce70ff748e7e3e92e2001f856611ef504e4e4e030cb2fde"} Jan 07 04:00:03 crc kubenswrapper[4980]: I0107 04:00:03.860246 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.005261 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee618e14-44ba-4a43-9ce3-b933ddc708fa-secret-volume\") pod \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.005388 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cww6t\" (UniqueName: \"kubernetes.io/projected/ee618e14-44ba-4a43-9ce3-b933ddc708fa-kube-api-access-cww6t\") pod \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.005597 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee618e14-44ba-4a43-9ce3-b933ddc708fa-config-volume\") pod \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\" (UID: \"ee618e14-44ba-4a43-9ce3-b933ddc708fa\") " Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.006145 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee618e14-44ba-4a43-9ce3-b933ddc708fa-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee618e14-44ba-4a43-9ce3-b933ddc708fa" (UID: "ee618e14-44ba-4a43-9ce3-b933ddc708fa"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.006316 4980 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee618e14-44ba-4a43-9ce3-b933ddc708fa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.014171 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee618e14-44ba-4a43-9ce3-b933ddc708fa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee618e14-44ba-4a43-9ce3-b933ddc708fa" (UID: "ee618e14-44ba-4a43-9ce3-b933ddc708fa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.014287 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee618e14-44ba-4a43-9ce3-b933ddc708fa-kube-api-access-cww6t" (OuterVolumeSpecName: "kube-api-access-cww6t") pod "ee618e14-44ba-4a43-9ce3-b933ddc708fa" (UID: "ee618e14-44ba-4a43-9ce3-b933ddc708fa"). InnerVolumeSpecName "kube-api-access-cww6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.108193 4980 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee618e14-44ba-4a43-9ce3-b933ddc708fa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.108244 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cww6t\" (UniqueName: \"kubernetes.io/projected/ee618e14-44ba-4a43-9ce3-b933ddc708fa-kube-api-access-cww6t\") on node \"crc\" DevicePath \"\"" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.456033 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" event={"ID":"ee618e14-44ba-4a43-9ce3-b933ddc708fa","Type":"ContainerDied","Data":"4c2eed9f06b15e1c50edf7c8563c795dcc0d70663d38df2c4e6993994f20113b"} Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.456896 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598" Jan 07 04:00:04 crc kubenswrapper[4980]: I0107 04:00:04.457703 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c2eed9f06b15e1c50edf7c8563c795dcc0d70663d38df2c4e6993994f20113b" Jan 07 04:00:05 crc kubenswrapper[4980]: I0107 04:00:05.736758 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:00:05 crc kubenswrapper[4980]: E0107 04:00:05.737719 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:00:18 crc kubenswrapper[4980]: I0107 04:00:18.060302 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-32f7-account-create-update-5k5mk"] Jan 07 04:00:18 crc kubenswrapper[4980]: I0107 04:00:18.073609 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-m58bl"] Jan 07 04:00:18 crc kubenswrapper[4980]: I0107 04:00:18.084268 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zd4fd"] Jan 07 04:00:18 crc kubenswrapper[4980]: I0107 04:00:18.095207 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-m58bl"] Jan 07 04:00:18 crc kubenswrapper[4980]: I0107 04:00:18.104537 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-32f7-account-create-update-5k5mk"] Jan 07 04:00:18 crc kubenswrapper[4980]: I0107 04:00:18.111363 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zd4fd"] Jan 07 04:00:19 crc 
kubenswrapper[4980]: I0107 04:00:19.735927 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:00:19 crc kubenswrapper[4980]: E0107 04:00:19.737190 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:00:19 crc kubenswrapper[4980]: I0107 04:00:19.753819 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a530ce0-0dd8-46af-87ae-d2c63cb588f3" path="/var/lib/kubelet/pods/7a530ce0-0dd8-46af-87ae-d2c63cb588f3/volumes" Jan 07 04:00:19 crc kubenswrapper[4980]: I0107 04:00:19.755182 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4907f9-9701-4aea-8ced-7a3f130bea66" path="/var/lib/kubelet/pods/8f4907f9-9701-4aea-8ced-7a3f130bea66/volumes" Jan 07 04:00:19 crc kubenswrapper[4980]: I0107 04:00:19.756294 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d58f3e70-86ff-4d8b-8422-45ae65b067d6" path="/var/lib/kubelet/pods/d58f3e70-86ff-4d8b-8422-45ae65b067d6/volumes" Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.038707 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-d9v7c"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.049609 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c79a-account-create-update-btmmk"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.061683 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mp6vb"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.071888 4980 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-0d12-account-create-update-rlkh2"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.080617 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-d9v7c"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.090108 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mp6vb"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.100494 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0d12-account-create-update-rlkh2"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.106887 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c79a-account-create-update-btmmk"] Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.755781 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a49206c-706d-4cbf-b4fb-9cebdb720837" path="/var/lib/kubelet/pods/2a49206c-706d-4cbf-b4fb-9cebdb720837/volumes" Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.758007 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="540d5527-6e10-4850-b025-b0a8b62a8fc7" path="/var/lib/kubelet/pods/540d5527-6e10-4850-b025-b0a8b62a8fc7/volumes" Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.759457 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87a557a-be2d-477a-afe4-315cd7b49f9a" path="/var/lib/kubelet/pods/a87a557a-be2d-477a-afe4-315cd7b49f9a/volumes" Jan 07 04:00:21 crc kubenswrapper[4980]: I0107 04:00:21.761257 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8aaabcf-42ed-4586-8d72-bb9e14a8d369" path="/var/lib/kubelet/pods/a8aaabcf-42ed-4586-8d72-bb9e14a8d369/volumes" Jan 07 04:00:32 crc kubenswrapper[4980]: I0107 04:00:32.060920 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hxl82"] Jan 07 04:00:32 crc kubenswrapper[4980]: I0107 04:00:32.075023 4980 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-db-sync-hxl82"] Jan 07 04:00:33 crc kubenswrapper[4980]: I0107 04:00:33.736953 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:00:33 crc kubenswrapper[4980]: E0107 04:00:33.737954 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:00:33 crc kubenswrapper[4980]: I0107 04:00:33.755807 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f098f890-2064-479b-bd73-cf3269c4f3c2" path="/var/lib/kubelet/pods/f098f890-2064-479b-bd73-cf3269c4f3c2/volumes" Jan 07 04:00:38 crc kubenswrapper[4980]: I0107 04:00:37.866274 4980 generic.go:334] "Generic (PLEG): container finished" podID="111ee99f-4f5d-4647-9ee9-33addfaad13e" containerID="f9f466937990a30065c8fd034366fd10a3d6d6ba8910105329b28625a428b389" exitCode=0 Jan 07 04:00:38 crc kubenswrapper[4980]: I0107 04:00:37.866351 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" event={"ID":"111ee99f-4f5d-4647-9ee9-33addfaad13e","Type":"ContainerDied","Data":"f9f466937990a30065c8fd034366fd10a3d6d6ba8910105329b28625a428b389"} Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.422109 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.545641 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgvjl\" (UniqueName: \"kubernetes.io/projected/111ee99f-4f5d-4647-9ee9-33addfaad13e-kube-api-access-vgvjl\") pod \"111ee99f-4f5d-4647-9ee9-33addfaad13e\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.545720 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory\") pod \"111ee99f-4f5d-4647-9ee9-33addfaad13e\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.545848 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-ssh-key-openstack-edpm-ipam\") pod \"111ee99f-4f5d-4647-9ee9-33addfaad13e\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.561858 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111ee99f-4f5d-4647-9ee9-33addfaad13e-kube-api-access-vgvjl" (OuterVolumeSpecName: "kube-api-access-vgvjl") pod "111ee99f-4f5d-4647-9ee9-33addfaad13e" (UID: "111ee99f-4f5d-4647-9ee9-33addfaad13e"). InnerVolumeSpecName "kube-api-access-vgvjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:00:39 crc kubenswrapper[4980]: E0107 04:00:39.600054 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory podName:111ee99f-4f5d-4647-9ee9-33addfaad13e nodeName:}" failed. 
No retries permitted until 2026-01-07 04:00:40.100028112 +0000 UTC m=+1686.665722847 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory") pod "111ee99f-4f5d-4647-9ee9-33addfaad13e" (UID: "111ee99f-4f5d-4647-9ee9-33addfaad13e") : error deleting /var/lib/kubelet/pods/111ee99f-4f5d-4647-9ee9-33addfaad13e/volume-subpaths: remove /var/lib/kubelet/pods/111ee99f-4f5d-4647-9ee9-33addfaad13e/volume-subpaths: no such file or directory Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.608719 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "111ee99f-4f5d-4647-9ee9-33addfaad13e" (UID: "111ee99f-4f5d-4647-9ee9-33addfaad13e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.648059 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.648093 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgvjl\" (UniqueName: \"kubernetes.io/projected/111ee99f-4f5d-4647-9ee9-33addfaad13e-kube-api-access-vgvjl\") on node \"crc\" DevicePath \"\"" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.784630 4980 scope.go:117] "RemoveContainer" containerID="ecd0933c106a7c591ee289deb4f1a1677096a62a762cf0bb1336196203bf30cf" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.816997 4980 scope.go:117] "RemoveContainer" containerID="04c50f021ba577bcc0bad4d94ec431b2f7d587bde330848f7fc2c3cbb911c2ed" Jan 07 04:00:39 crc kubenswrapper[4980]: 
I0107 04:00:39.832558 4980 scope.go:117] "RemoveContainer" containerID="5d00658dc783f9a775da18be3f0a5e95d9480eb583fa5823ef1d921cc4335877" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.854518 4980 scope.go:117] "RemoveContainer" containerID="b4867fd7ff6483d2ff4162e9b34ad8e0052d09b19835b8c65219f8203de8f007" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.875172 4980 scope.go:117] "RemoveContainer" containerID="f688f3372fa4d8c9bff711a308dc77267795e3401909f2c964e1991f0fabd752" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.894521 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" event={"ID":"111ee99f-4f5d-4647-9ee9-33addfaad13e","Type":"ContainerDied","Data":"e13565ee1c4b9ead8345ca935abf00df6b0e625068dfd4eaf31b340ad42cab34"} Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.894556 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e13565ee1c4b9ead8345ca935abf00df6b0e625068dfd4eaf31b340ad42cab34" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.894611 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.899514 4980 scope.go:117] "RemoveContainer" containerID="72117884f66dffe7458112b6bfaabb836f07dbc928f19afae06412ca930fac12" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.928033 4980 scope.go:117] "RemoveContainer" containerID="3fcf3512b3a684e46f61960dbfe9f67cf3ae98696d92d7be15eff3a5d32f0ac8" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.960076 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr"] Jan 07 04:00:39 crc kubenswrapper[4980]: E0107 04:00:39.960435 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111ee99f-4f5d-4647-9ee9-33addfaad13e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.960451 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="111ee99f-4f5d-4647-9ee9-33addfaad13e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 07 04:00:39 crc kubenswrapper[4980]: E0107 04:00:39.960474 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee618e14-44ba-4a43-9ce3-b933ddc708fa" containerName="collect-profiles" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.960479 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee618e14-44ba-4a43-9ce3-b933ddc708fa" containerName="collect-profiles" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.960660 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee618e14-44ba-4a43-9ce3-b933ddc708fa" containerName="collect-profiles" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.960683 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="111ee99f-4f5d-4647-9ee9-33addfaad13e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 07 04:00:39 crc kubenswrapper[4980]: 
I0107 04:00:39.961224 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.962713 4980 scope.go:117] "RemoveContainer" containerID="cbdce93e00678741b45332c37336b592b0bfcd17656e3ccaca3e193adfdef1f0" Jan 07 04:00:39 crc kubenswrapper[4980]: I0107 04:00:39.988697 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr"] Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.000856 4980 scope.go:117] "RemoveContainer" containerID="b8257186eb6cb11ccc7ddd36ad899460d58368d15d4884b277ca743a1a8f6fcc" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.026147 4980 scope.go:117] "RemoveContainer" containerID="b8c1359660a18df61abcf12b96cdf5c5566a95c78eb330ddf61adccccb82c95a" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.041740 4980 scope.go:117] "RemoveContainer" containerID="bcae7e3831f50d1b690aec5bdb3e624ee460003689f9c829efbf74ac0a85ba79" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.057134 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.057314 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl6xq\" (UniqueName: \"kubernetes.io/projected/67224118-a228-4d50-a70e-1d675bd7df2e-kube-api-access-nl6xq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.057348 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.061524 4980 scope.go:117] "RemoveContainer" containerID="959143afb9e7179dff2e94c6d490634843d2e22e8625aa5ec54479ba41950dae" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.080367 4980 scope.go:117] "RemoveContainer" containerID="830aac1410ed7ed865c739dd074f0ac3fdd9e1fb4c9bbd628e9e8e631eef0301" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.098490 4980 scope.go:117] "RemoveContainer" containerID="35bddfeec114506eaa0c6f8cbbc19b864e3ca187252f52e611dde2e2aafaad89" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.116986 4980 scope.go:117] "RemoveContainer" containerID="2d675c963fdc9291c38b465e0745ca435a711a3deacc665a0f9d227ea8d6eab5" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.159044 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory\") pod \"111ee99f-4f5d-4647-9ee9-33addfaad13e\" (UID: \"111ee99f-4f5d-4647-9ee9-33addfaad13e\") " Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.159333 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl6xq\" (UniqueName: \"kubernetes.io/projected/67224118-a228-4d50-a70e-1d675bd7df2e-kube-api-access-nl6xq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: 
\"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.159374 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.159397 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.162958 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.165144 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.169412 4980 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory" (OuterVolumeSpecName: "inventory") pod "111ee99f-4f5d-4647-9ee9-33addfaad13e" (UID: "111ee99f-4f5d-4647-9ee9-33addfaad13e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.177922 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl6xq\" (UniqueName: \"kubernetes.io/projected/67224118-a228-4d50-a70e-1d675bd7df2e-kube-api-access-nl6xq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7pljr\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.261330 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111ee99f-4f5d-4647-9ee9-33addfaad13e-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.282023 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:00:40 crc kubenswrapper[4980]: I0107 04:00:40.934099 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr"] Jan 07 04:00:41 crc kubenswrapper[4980]: I0107 04:00:41.916300 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" event={"ID":"67224118-a228-4d50-a70e-1d675bd7df2e","Type":"ContainerStarted","Data":"05785d4b180a604a5506037946371c170ba272305ca09be1846dc981b060ad35"} Jan 07 04:00:41 crc kubenswrapper[4980]: I0107 04:00:41.916692 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" event={"ID":"67224118-a228-4d50-a70e-1d675bd7df2e","Type":"ContainerStarted","Data":"aa46a542dca99644218f5d476773f780f3d15c6c2c3710e1dc127470fa271eb0"} Jan 07 04:00:41 crc kubenswrapper[4980]: I0107 04:00:41.944966 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" podStartSLOduration=2.41873938 podStartE2EDuration="2.944936751s" podCreationTimestamp="2026-01-07 04:00:39 +0000 UTC" firstStartedPulling="2026-01-07 04:00:40.942628147 +0000 UTC m=+1687.508322882" lastFinishedPulling="2026-01-07 04:00:41.468825528 +0000 UTC m=+1688.034520253" observedRunningTime="2026-01-07 04:00:41.931476704 +0000 UTC m=+1688.497171489" watchObservedRunningTime="2026-01-07 04:00:41.944936751 +0000 UTC m=+1688.510631526" Jan 07 04:00:47 crc kubenswrapper[4980]: I0107 04:00:47.736511 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:00:47 crc kubenswrapper[4980]: E0107 04:00:47.738888 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.163488 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29462641-qspbr"] Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.166134 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.188184 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29462641-qspbr"] Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.266736 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjsgp\" (UniqueName: \"kubernetes.io/projected/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-kube-api-access-mjsgp\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.266795 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.266837 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-combined-ca-bundle\") pod \"keystone-cron-29462641-qspbr\" (UID: 
\"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.267093 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-fernet-keys\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.368152 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-fernet-keys\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.368239 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjsgp\" (UniqueName: \"kubernetes.io/projected/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-kube-api-access-mjsgp\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.368283 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.368330 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-combined-ca-bundle\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " 
pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.376620 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-fernet-keys\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.376878 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-combined-ca-bundle\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.378286 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.404617 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjsgp\" (UniqueName: \"kubernetes.io/projected/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-kube-api-access-mjsgp\") pod \"keystone-cron-29462641-qspbr\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.484483 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:00 crc kubenswrapper[4980]: I0107 04:01:00.989382 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29462641-qspbr"] Jan 07 04:01:01 crc kubenswrapper[4980]: I0107 04:01:01.120119 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29462641-qspbr" event={"ID":"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd","Type":"ContainerStarted","Data":"08d4af2467c318a0d52ef70bda2e47b922b24f48b9c16514ae23210fe6a0fc57"} Jan 07 04:01:01 crc kubenswrapper[4980]: I0107 04:01:01.736234 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:01:01 crc kubenswrapper[4980]: E0107 04:01:01.737220 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:01:02 crc kubenswrapper[4980]: I0107 04:01:02.051326 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-t6522"] Jan 07 04:01:02 crc kubenswrapper[4980]: I0107 04:01:02.059779 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-t6522"] Jan 07 04:01:02 crc kubenswrapper[4980]: I0107 04:01:02.130745 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29462641-qspbr" event={"ID":"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd","Type":"ContainerStarted","Data":"de35715777dc12212c1632ec00b4b43729862b91ac57c4c0a80a8393ef90486b"} Jan 07 04:01:02 crc kubenswrapper[4980]: I0107 04:01:02.162221 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-cron-29462641-qspbr" podStartSLOduration=2.162185047 podStartE2EDuration="2.162185047s" podCreationTimestamp="2026-01-07 04:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 04:01:02.156053059 +0000 UTC m=+1708.721747834" watchObservedRunningTime="2026-01-07 04:01:02.162185047 +0000 UTC m=+1708.727879812" Jan 07 04:01:03 crc kubenswrapper[4980]: I0107 04:01:03.752496 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d250b696-8adc-41af-8a3c-36c7a87a721f" path="/var/lib/kubelet/pods/d250b696-8adc-41af-8a3c-36c7a87a721f/volumes" Jan 07 04:01:04 crc kubenswrapper[4980]: I0107 04:01:04.156487 4980 generic.go:334] "Generic (PLEG): container finished" podID="f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" containerID="de35715777dc12212c1632ec00b4b43729862b91ac57c4c0a80a8393ef90486b" exitCode=0 Jan 07 04:01:04 crc kubenswrapper[4980]: I0107 04:01:04.156575 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29462641-qspbr" event={"ID":"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd","Type":"ContainerDied","Data":"de35715777dc12212c1632ec00b4b43729862b91ac57c4c0a80a8393ef90486b"} Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.574755 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.686503 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-fernet-keys\") pod \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.686673 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjsgp\" (UniqueName: \"kubernetes.io/projected/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-kube-api-access-mjsgp\") pod \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.686833 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data\") pod \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.687023 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-combined-ca-bundle\") pod \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.693812 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-kube-api-access-mjsgp" (OuterVolumeSpecName: "kube-api-access-mjsgp") pod "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" (UID: "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd"). InnerVolumeSpecName "kube-api-access-mjsgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.694172 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" (UID: "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.738070 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" (UID: "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.788018 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data" (OuterVolumeSpecName: "config-data") pod "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" (UID: "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.788294 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data\") pod \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\" (UID: \"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd\") " Jan 07 04:01:05 crc kubenswrapper[4980]: W0107 04:01:05.788479 4980 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd/volumes/kubernetes.io~secret/config-data Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.788505 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data" (OuterVolumeSpecName: "config-data") pod "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" (UID: "f0d0d398-f0fc-4cec-abb7-7c5eca5254cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.788826 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjsgp\" (UniqueName: \"kubernetes.io/projected/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-kube-api-access-mjsgp\") on node \"crc\" DevicePath \"\"" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.788847 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.788860 4980 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:01:05 crc kubenswrapper[4980]: I0107 04:01:05.788871 4980 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0d0d398-f0fc-4cec-abb7-7c5eca5254cd-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 07 04:01:06 crc kubenswrapper[4980]: I0107 04:01:06.189135 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29462641-qspbr" event={"ID":"f0d0d398-f0fc-4cec-abb7-7c5eca5254cd","Type":"ContainerDied","Data":"08d4af2467c318a0d52ef70bda2e47b922b24f48b9c16514ae23210fe6a0fc57"} Jan 07 04:01:06 crc kubenswrapper[4980]: I0107 04:01:06.189718 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d4af2467c318a0d52ef70bda2e47b922b24f48b9c16514ae23210fe6a0fc57" Jan 07 04:01:06 crc kubenswrapper[4980]: I0107 04:01:06.189476 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29462641-qspbr" Jan 07 04:01:13 crc kubenswrapper[4980]: I0107 04:01:13.748602 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:01:13 crc kubenswrapper[4980]: E0107 04:01:13.749992 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:01:15 crc kubenswrapper[4980]: I0107 04:01:15.034961 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9f6jz"] Jan 07 04:01:15 crc kubenswrapper[4980]: I0107 04:01:15.044814 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9f6jz"] Jan 07 04:01:15 crc kubenswrapper[4980]: I0107 04:01:15.052768 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vxwsx"] Jan 07 04:01:15 crc kubenswrapper[4980]: I0107 04:01:15.064428 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vxwsx"] Jan 07 04:01:15 crc kubenswrapper[4980]: I0107 04:01:15.753136 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40c84e1-0c50-4439-bfbe-469ac096cbea" path="/var/lib/kubelet/pods/d40c84e1-0c50-4439-bfbe-469ac096cbea/volumes" Jan 07 04:01:15 crc kubenswrapper[4980]: I0107 04:01:15.754717 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea978d21-b28c-4714-8f07-b70f84f0efa8" path="/var/lib/kubelet/pods/ea978d21-b28c-4714-8f07-b70f84f0efa8/volumes" Jan 07 04:01:20 crc kubenswrapper[4980]: I0107 04:01:20.047927 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-bng92"] Jan 07 04:01:20 crc kubenswrapper[4980]: I0107 04:01:20.070728 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bng92"] Jan 07 04:01:21 crc kubenswrapper[4980]: I0107 04:01:21.747570 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc47b5ba-b1a8-4615-be80-db3ae1580399" path="/var/lib/kubelet/pods/bc47b5ba-b1a8-4615-be80-db3ae1580399/volumes" Jan 07 04:01:28 crc kubenswrapper[4980]: I0107 04:01:28.043682 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-b9flf"] Jan 07 04:01:28 crc kubenswrapper[4980]: I0107 04:01:28.061434 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-b9flf"] Jan 07 04:01:28 crc kubenswrapper[4980]: I0107 04:01:28.736713 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:01:28 crc kubenswrapper[4980]: E0107 04:01:28.737494 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:01:29 crc kubenswrapper[4980]: I0107 04:01:29.750127 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df19fa0-d6ce-4539-8f87-9d6935314e82" path="/var/lib/kubelet/pods/2df19fa0-d6ce-4539-8f87-9d6935314e82/volumes" Jan 07 04:01:40 crc kubenswrapper[4980]: I0107 04:01:40.440378 4980 scope.go:117] "RemoveContainer" containerID="b247233d37522498e864a29b26636f8e7437eae0bd7181c80835032b566449b3" Jan 07 04:01:40 crc kubenswrapper[4980]: I0107 04:01:40.480668 4980 scope.go:117] "RemoveContainer" 
containerID="d28d5b27a74017cec149329a4e1880bc4320a466663b3ad333cf8d7fdfc3ddf4" Jan 07 04:01:40 crc kubenswrapper[4980]: I0107 04:01:40.534398 4980 scope.go:117] "RemoveContainer" containerID="6e0d55fabb87c0cf46c5fc766767b97312dbbf0f0f4ea538e1dad253737414be" Jan 07 04:01:40 crc kubenswrapper[4980]: I0107 04:01:40.592973 4980 scope.go:117] "RemoveContainer" containerID="015140773c7c52137c3f3ea9229344b379a27a4e74591ccaf71c5b0834e11658" Jan 07 04:01:40 crc kubenswrapper[4980]: I0107 04:01:40.680016 4980 scope.go:117] "RemoveContainer" containerID="ca79779095f3d5c4e3cee36263df16743fdac1a089634cdb6a5103e151bf4164" Jan 07 04:01:42 crc kubenswrapper[4980]: I0107 04:01:42.736528 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:01:42 crc kubenswrapper[4980]: E0107 04:01:42.737435 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:01:56 crc kubenswrapper[4980]: I0107 04:01:56.736358 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:01:56 crc kubenswrapper[4980]: E0107 04:01:56.737654 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:01:59 crc 
kubenswrapper[4980]: I0107 04:01:59.814158 4980 generic.go:334] "Generic (PLEG): container finished" podID="67224118-a228-4d50-a70e-1d675bd7df2e" containerID="05785d4b180a604a5506037946371c170ba272305ca09be1846dc981b060ad35" exitCode=0 Jan 07 04:01:59 crc kubenswrapper[4980]: I0107 04:01:59.814291 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" event={"ID":"67224118-a228-4d50-a70e-1d675bd7df2e","Type":"ContainerDied","Data":"05785d4b180a604a5506037946371c170ba272305ca09be1846dc981b060ad35"} Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.284709 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.383758 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl6xq\" (UniqueName: \"kubernetes.io/projected/67224118-a228-4d50-a70e-1d675bd7df2e-kube-api-access-nl6xq\") pod \"67224118-a228-4d50-a70e-1d675bd7df2e\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.383817 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-inventory\") pod \"67224118-a228-4d50-a70e-1d675bd7df2e\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.383880 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-ssh-key-openstack-edpm-ipam\") pod \"67224118-a228-4d50-a70e-1d675bd7df2e\" (UID: \"67224118-a228-4d50-a70e-1d675bd7df2e\") " Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.389773 4980 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67224118-a228-4d50-a70e-1d675bd7df2e-kube-api-access-nl6xq" (OuterVolumeSpecName: "kube-api-access-nl6xq") pod "67224118-a228-4d50-a70e-1d675bd7df2e" (UID: "67224118-a228-4d50-a70e-1d675bd7df2e"). InnerVolumeSpecName "kube-api-access-nl6xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.416218 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-inventory" (OuterVolumeSpecName: "inventory") pod "67224118-a228-4d50-a70e-1d675bd7df2e" (UID: "67224118-a228-4d50-a70e-1d675bd7df2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.424749 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67224118-a228-4d50-a70e-1d675bd7df2e" (UID: "67224118-a228-4d50-a70e-1d675bd7df2e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.486616 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl6xq\" (UniqueName: \"kubernetes.io/projected/67224118-a228-4d50-a70e-1d675bd7df2e-kube-api-access-nl6xq\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.486660 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-inventory\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.486672 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67224118-a228-4d50-a70e-1d675bd7df2e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.836158 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr" event={"ID":"67224118-a228-4d50-a70e-1d675bd7df2e","Type":"ContainerDied","Data":"aa46a542dca99644218f5d476773f780f3d15c6c2c3710e1dc127470fa271eb0"}
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.836222 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa46a542dca99644218f5d476773f780f3d15c6c2c3710e1dc127470fa271eb0"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.836239 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7pljr"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.975059 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"]
Jan 07 04:02:01 crc kubenswrapper[4980]: E0107 04:02:01.975989 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" containerName="keystone-cron"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.976068 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" containerName="keystone-cron"
Jan 07 04:02:01 crc kubenswrapper[4980]: E0107 04:02:01.976144 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67224118-a228-4d50-a70e-1d675bd7df2e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.976196 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="67224118-a228-4d50-a70e-1d675bd7df2e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.976489 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="67224118-a228-4d50-a70e-1d675bd7df2e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.976571 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d0d398-f0fc-4cec-abb7-7c5eca5254cd" containerName="keystone-cron"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.977411 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.981876 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.982041 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.982199 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.982502 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.995789 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.996120 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rvsw\" (UniqueName: \"kubernetes.io/projected/13eceb60-89ef-4f65-9639-7295976d7c72-kube-api-access-8rvsw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:01 crc kubenswrapper[4980]: I0107 04:02:01.996222 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.002169 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"]
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.097859 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rvsw\" (UniqueName: \"kubernetes.io/projected/13eceb60-89ef-4f65-9639-7295976d7c72-kube-api-access-8rvsw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.097913 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.097970 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.103531 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.104061 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.113673 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rvsw\" (UniqueName: \"kubernetes.io/projected/13eceb60-89ef-4f65-9639-7295976d7c72-kube-api-access-8rvsw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.297837 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:02 crc kubenswrapper[4980]: I0107 04:02:02.838513 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"]
Jan 07 04:02:03 crc kubenswrapper[4980]: I0107 04:02:03.856886 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs" event={"ID":"13eceb60-89ef-4f65-9639-7295976d7c72","Type":"ContainerStarted","Data":"25e7df09b4988f2376d7803ad6e8631b9a47e6f2b41366f6aba59ecb6d069f2b"}
Jan 07 04:02:03 crc kubenswrapper[4980]: I0107 04:02:03.857795 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs" event={"ID":"13eceb60-89ef-4f65-9639-7295976d7c72","Type":"ContainerStarted","Data":"a91fbea87ec4054bb99ab79c33119b970378e7dbca1fa8cebbf1de993aaf9571"}
Jan 07 04:02:03 crc kubenswrapper[4980]: I0107 04:02:03.882540 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs" podStartSLOduration=2.244919173 podStartE2EDuration="2.882521479s" podCreationTimestamp="2026-01-07 04:02:01 +0000 UTC" firstStartedPulling="2026-01-07 04:02:02.85411104 +0000 UTC m=+1769.419805815" lastFinishedPulling="2026-01-07 04:02:03.491713336 +0000 UTC m=+1770.057408121" observedRunningTime="2026-01-07 04:02:03.878102364 +0000 UTC m=+1770.443797139" watchObservedRunningTime="2026-01-07 04:02:03.882521479 +0000 UTC m=+1770.448216214"
Jan 07 04:02:07 crc kubenswrapper[4980]: I0107 04:02:07.735862 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862"
Jan 07 04:02:07 crc kubenswrapper[4980]: E0107 04:02:07.736883 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:02:08 crc kubenswrapper[4980]: I0107 04:02:08.923319 4980 generic.go:334] "Generic (PLEG): container finished" podID="13eceb60-89ef-4f65-9639-7295976d7c72" containerID="25e7df09b4988f2376d7803ad6e8631b9a47e6f2b41366f6aba59ecb6d069f2b" exitCode=0
Jan 07 04:02:08 crc kubenswrapper[4980]: I0107 04:02:08.923468 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs" event={"ID":"13eceb60-89ef-4f65-9639-7295976d7c72","Type":"ContainerDied","Data":"25e7df09b4988f2376d7803ad6e8631b9a47e6f2b41366f6aba59ecb6d069f2b"}
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.059272 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zcdps"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.077775 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-771b-account-create-update-gnf2t"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.089313 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8f6ft"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.097393 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jlvxp"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.105193 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zcdps"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.113193 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8f6ft"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.121527 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8ad2-account-create-update-wglzx"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.130503 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-771b-account-create-update-gnf2t"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.138576 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jlvxp"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.147418 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8ad2-account-create-update-wglzx"]
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.448594 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.587607 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-ssh-key-openstack-edpm-ipam\") pod \"13eceb60-89ef-4f65-9639-7295976d7c72\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") "
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.587690 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rvsw\" (UniqueName: \"kubernetes.io/projected/13eceb60-89ef-4f65-9639-7295976d7c72-kube-api-access-8rvsw\") pod \"13eceb60-89ef-4f65-9639-7295976d7c72\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") "
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.587745 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-inventory\") pod \"13eceb60-89ef-4f65-9639-7295976d7c72\" (UID: \"13eceb60-89ef-4f65-9639-7295976d7c72\") "
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.595616 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13eceb60-89ef-4f65-9639-7295976d7c72-kube-api-access-8rvsw" (OuterVolumeSpecName: "kube-api-access-8rvsw") pod "13eceb60-89ef-4f65-9639-7295976d7c72" (UID: "13eceb60-89ef-4f65-9639-7295976d7c72"). InnerVolumeSpecName "kube-api-access-8rvsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.630147 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-inventory" (OuterVolumeSpecName: "inventory") pod "13eceb60-89ef-4f65-9639-7295976d7c72" (UID: "13eceb60-89ef-4f65-9639-7295976d7c72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.632040 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "13eceb60-89ef-4f65-9639-7295976d7c72" (UID: "13eceb60-89ef-4f65-9639-7295976d7c72"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.690921 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.690970 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rvsw\" (UniqueName: \"kubernetes.io/projected/13eceb60-89ef-4f65-9639-7295976d7c72-kube-api-access-8rvsw\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.690988 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13eceb60-89ef-4f65-9639-7295976d7c72-inventory\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.958411 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs" event={"ID":"13eceb60-89ef-4f65-9639-7295976d7c72","Type":"ContainerDied","Data":"a91fbea87ec4054bb99ab79c33119b970378e7dbca1fa8cebbf1de993aaf9571"}
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.958470 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91fbea87ec4054bb99ab79c33119b970378e7dbca1fa8cebbf1de993aaf9571"
Jan 07 04:02:10 crc kubenswrapper[4980]: I0107 04:02:10.958579 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.039916 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"]
Jan 07 04:02:11 crc kubenswrapper[4980]: E0107 04:02:11.040355 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eceb60-89ef-4f65-9639-7295976d7c72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.040374 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="13eceb60-89ef-4f65-9639-7295976d7c72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.040597 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="13eceb60-89ef-4f65-9639-7295976d7c72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.041251 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.043372 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.043538 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.043884 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.047162 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.051351 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"]
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.072728 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f4c7-account-create-update-vp8hz"]
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.083754 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f4c7-account-create-update-vp8hz"]
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.098935 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.099028 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.099055 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvkhk\" (UniqueName: \"kubernetes.io/projected/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-kube-api-access-lvkhk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.201621 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.201707 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.201741 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvkhk\" (UniqueName: \"kubernetes.io/projected/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-kube-api-access-lvkhk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.207095 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.210435 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.228328 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvkhk\" (UniqueName: \"kubernetes.io/projected/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-kube-api-access-lvkhk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r96jt\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.356990 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.755240 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c025a9-1e74-4b76-aa08-b409e8ebfeda" path="/var/lib/kubelet/pods/16c025a9-1e74-4b76-aa08-b409e8ebfeda/volumes"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.757265 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6626e80c-7ae0-45cc-a3f3-77d55d176b86" path="/var/lib/kubelet/pods/6626e80c-7ae0-45cc-a3f3-77d55d176b86/volumes"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.758689 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68715bdb-bc11-4440-9be6-9399c45ff882" path="/var/lib/kubelet/pods/68715bdb-bc11-4440-9be6-9399c45ff882/volumes"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.760249 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae9ab52-7706-4348-ba49-0c7d0e884dda" path="/var/lib/kubelet/pods/7ae9ab52-7706-4348-ba49-0c7d0e884dda/volumes"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.762613 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d18756-8ad4-44c7-8ff0-5cd67e7d8585" path="/var/lib/kubelet/pods/e7d18756-8ad4-44c7-8ff0-5cd67e7d8585/volumes"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.764095 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e7d535-d7b0-4742-9f6a-f45e56965313" path="/var/lib/kubelet/pods/f8e7d535-d7b0-4742-9f6a-f45e56965313/volumes"
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.920885 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"]
Jan 07 04:02:11 crc kubenswrapper[4980]: W0107 04:02:11.932308 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaccf2eeb_147d_49a3_8aa3_06d9e52a2fb4.slice/crio-51ce022c3090ca36554fd78e6809b772187cdaa5d5ba446bdc62431763d9a8f7 WatchSource:0}: Error finding container 51ce022c3090ca36554fd78e6809b772187cdaa5d5ba446bdc62431763d9a8f7: Status 404 returned error can't find the container with id 51ce022c3090ca36554fd78e6809b772187cdaa5d5ba446bdc62431763d9a8f7
Jan 07 04:02:11 crc kubenswrapper[4980]: I0107 04:02:11.969005 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt" event={"ID":"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4","Type":"ContainerStarted","Data":"51ce022c3090ca36554fd78e6809b772187cdaa5d5ba446bdc62431763d9a8f7"}
Jan 07 04:02:12 crc kubenswrapper[4980]: I0107 04:02:12.983411 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt" event={"ID":"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4","Type":"ContainerStarted","Data":"18e300efe973acf34243ff7087001be461dbf7a326d8d61a52034c48e2cbeccc"}
Jan 07 04:02:18 crc kubenswrapper[4980]: I0107 04:02:18.736270 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862"
Jan 07 04:02:18 crc kubenswrapper[4980]: E0107 04:02:18.737432 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:02:31 crc kubenswrapper[4980]: I0107 04:02:31.736634 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862"
Jan 07 04:02:31 crc kubenswrapper[4980]: E0107 04:02:31.737788 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:02:38 crc kubenswrapper[4980]: I0107 04:02:38.051226 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt" podStartSLOduration=26.603051672 podStartE2EDuration="27.051201808s" podCreationTimestamp="2026-01-07 04:02:11 +0000 UTC" firstStartedPulling="2026-01-07 04:02:11.935694587 +0000 UTC m=+1778.501389322" lastFinishedPulling="2026-01-07 04:02:12.383844713 +0000 UTC m=+1778.949539458" observedRunningTime="2026-01-07 04:02:13.018124117 +0000 UTC m=+1779.583818862" watchObservedRunningTime="2026-01-07 04:02:38.051201808 +0000 UTC m=+1804.616896573"
Jan 07 04:02:38 crc kubenswrapper[4980]: I0107 04:02:38.057898 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfwjd"]
Jan 07 04:02:38 crc kubenswrapper[4980]: I0107 04:02:38.074032 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfwjd"]
Jan 07 04:02:39 crc kubenswrapper[4980]: I0107 04:02:39.751946 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6dbf45b-58e2-4fa6-a127-d604586a3b44" path="/var/lib/kubelet/pods/c6dbf45b-58e2-4fa6-a127-d604586a3b44/volumes"
Jan 07 04:02:40 crc kubenswrapper[4980]: I0107 04:02:40.896829 4980 scope.go:117] "RemoveContainer" containerID="78f30ae585c932568adc81c9cf47658a33fb996895684575c0b7fe38fb5a0e4c"
Jan 07 04:02:40 crc kubenswrapper[4980]: I0107 04:02:40.964286 4980 scope.go:117] "RemoveContainer" containerID="d54a85d19796047450e3f1ba149298749348948eb059cf069b534f96502e6ed6"
Jan 07 04:02:40 crc kubenswrapper[4980]: I0107 04:02:40.986497 4980 scope.go:117] "RemoveContainer" containerID="fff9fde2317928b1ed7c5f1bd8ec2077e6898f82076a6bf6ae4bb5f8c70e0746"
Jan 07 04:02:41 crc kubenswrapper[4980]: I0107 04:02:41.033270 4980 scope.go:117] "RemoveContainer" containerID="ff198520c6314d0b50ef700eebd3491f957d7650c8ce83452134a91b06a51184"
Jan 07 04:02:41 crc kubenswrapper[4980]: I0107 04:02:41.096104 4980 scope.go:117] "RemoveContainer" containerID="3d3ef7ea01590bfe0a140cd40b0526247be5d8f57d38a2bfb49e02e8b294e640"
Jan 07 04:02:41 crc kubenswrapper[4980]: I0107 04:02:41.172308 4980 scope.go:117] "RemoveContainer" containerID="97976152a9675040bf5595084c2cd220d312a253cf0b6d407011a9f3e11a4ba3"
Jan 07 04:02:41 crc kubenswrapper[4980]: I0107 04:02:41.218292 4980 scope.go:117] "RemoveContainer" containerID="9b7677ed8a7edda305985be3d206c3f68e6dbf6e401f01e72e94d071f1fc969c"
Jan 07 04:02:42 crc kubenswrapper[4980]: I0107 04:02:42.736085 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862"
Jan 07 04:02:43 crc kubenswrapper[4980]: I0107 04:02:43.317353 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"d1ed879a83db37a20bc52acd9903f3aabf2339db3829b25c95e9a6e9ed19722c"}
Jan 07 04:02:53 crc kubenswrapper[4980]: I0107 04:02:53.417219 4980 generic.go:334] "Generic (PLEG): container finished" podID="accf2eeb-147d-49a3-8aa3-06d9e52a2fb4" containerID="18e300efe973acf34243ff7087001be461dbf7a326d8d61a52034c48e2cbeccc" exitCode=0
Jan 07 04:02:53 crc kubenswrapper[4980]: I0107 04:02:53.417403 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt" event={"ID":"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4","Type":"ContainerDied","Data":"18e300efe973acf34243ff7087001be461dbf7a326d8d61a52034c48e2cbeccc"}
Jan 07 04:02:54 crc kubenswrapper[4980]: I0107 04:02:54.976416 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.091941 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-inventory\") pod \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") "
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.092131 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvkhk\" (UniqueName: \"kubernetes.io/projected/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-kube-api-access-lvkhk\") pod \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") "
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.092226 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-ssh-key-openstack-edpm-ipam\") pod \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\" (UID: \"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4\") "
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.098488 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-kube-api-access-lvkhk" (OuterVolumeSpecName: "kube-api-access-lvkhk") pod "accf2eeb-147d-49a3-8aa3-06d9e52a2fb4" (UID: "accf2eeb-147d-49a3-8aa3-06d9e52a2fb4"). InnerVolumeSpecName "kube-api-access-lvkhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.120725 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-inventory" (OuterVolumeSpecName: "inventory") pod "accf2eeb-147d-49a3-8aa3-06d9e52a2fb4" (UID: "accf2eeb-147d-49a3-8aa3-06d9e52a2fb4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.131061 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "accf2eeb-147d-49a3-8aa3-06d9e52a2fb4" (UID: "accf2eeb-147d-49a3-8aa3-06d9e52a2fb4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.195288 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-inventory\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.195641 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvkhk\" (UniqueName: \"kubernetes.io/projected/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-kube-api-access-lvkhk\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.195792 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/accf2eeb-147d-49a3-8aa3-06d9e52a2fb4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.445235 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt" event={"ID":"accf2eeb-147d-49a3-8aa3-06d9e52a2fb4","Type":"ContainerDied","Data":"51ce022c3090ca36554fd78e6809b772187cdaa5d5ba446bdc62431763d9a8f7"}
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.445522 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ce022c3090ca36554fd78e6809b772187cdaa5d5ba446bdc62431763d9a8f7"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.445363 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r96jt"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.576713 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg"]
Jan 07 04:02:55 crc kubenswrapper[4980]: E0107 04:02:55.577166 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="accf2eeb-147d-49a3-8aa3-06d9e52a2fb4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.577182 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="accf2eeb-147d-49a3-8aa3-06d9e52a2fb4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.577378 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="accf2eeb-147d-49a3-8aa3-06d9e52a2fb4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.578518 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.589565 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg"]
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.639013 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.639236 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.639358 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.639477 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.705569 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh97w\" (UniqueName: \"kubernetes.io/projected/553631f5-8b26-4a24-bc27-cdbf1ad869db-kube-api-access-nh97w\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg"
Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.705715 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg"
Jan 07 04:02:55 crc
kubenswrapper[4980]: I0107 04:02:55.705734 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.808906 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.809845 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.810199 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh97w\" (UniqueName: \"kubernetes.io/projected/553631f5-8b26-4a24-bc27-cdbf1ad869db-kube-api-access-nh97w\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.815164 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.818742 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.843026 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh97w\" (UniqueName: \"kubernetes.io/projected/553631f5-8b26-4a24-bc27-cdbf1ad869db-kube-api-access-nh97w\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:55 crc kubenswrapper[4980]: I0107 04:02:55.951782 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:02:56 crc kubenswrapper[4980]: I0107 04:02:56.535526 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg"] Jan 07 04:02:56 crc kubenswrapper[4980]: W0107 04:02:56.543717 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod553631f5_8b26_4a24_bc27_cdbf1ad869db.slice/crio-9ae9cf5d679089be61d00d90b0b666f863a67e1fac7d2841da95ea403adc898d WatchSource:0}: Error finding container 9ae9cf5d679089be61d00d90b0b666f863a67e1fac7d2841da95ea403adc898d: Status 404 returned error can't find the container with id 9ae9cf5d679089be61d00d90b0b666f863a67e1fac7d2841da95ea403adc898d Jan 07 04:02:57 crc kubenswrapper[4980]: I0107 04:02:57.473184 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" event={"ID":"553631f5-8b26-4a24-bc27-cdbf1ad869db","Type":"ContainerStarted","Data":"56ec6e24a34f9daf36638ebe0365a19a56916a11782aa9ffd56a8325a1513214"} Jan 07 04:02:57 crc kubenswrapper[4980]: I0107 04:02:57.473616 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" event={"ID":"553631f5-8b26-4a24-bc27-cdbf1ad869db","Type":"ContainerStarted","Data":"9ae9cf5d679089be61d00d90b0b666f863a67e1fac7d2841da95ea403adc898d"} Jan 07 04:02:57 crc kubenswrapper[4980]: I0107 04:02:57.505107 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" podStartSLOduration=2.001071004 podStartE2EDuration="2.505082447s" podCreationTimestamp="2026-01-07 04:02:55 +0000 UTC" firstStartedPulling="2026-01-07 04:02:56.547632517 +0000 UTC m=+1823.113327262" lastFinishedPulling="2026-01-07 04:02:57.05164397 +0000 UTC m=+1823.617338705" 
observedRunningTime="2026-01-07 04:02:57.494037709 +0000 UTC m=+1824.059732484" watchObservedRunningTime="2026-01-07 04:02:57.505082447 +0000 UTC m=+1824.070777222" Jan 07 04:03:00 crc kubenswrapper[4980]: I0107 04:03:00.039495 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm66v"] Jan 07 04:03:00 crc kubenswrapper[4980]: I0107 04:03:00.049171 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm66v"] Jan 07 04:03:01 crc kubenswrapper[4980]: I0107 04:03:01.035906 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sg7zf"] Jan 07 04:03:01 crc kubenswrapper[4980]: I0107 04:03:01.045110 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sg7zf"] Jan 07 04:03:01 crc kubenswrapper[4980]: I0107 04:03:01.757152 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b876f0-a9bd-4b8c-97e0-ea68e0c935f8" path="/var/lib/kubelet/pods/38b876f0-a9bd-4b8c-97e0-ea68e0c935f8/volumes" Jan 07 04:03:01 crc kubenswrapper[4980]: I0107 04:03:01.758699 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762f824e-4099-41b1-ab8a-e20b9773b8a9" path="/var/lib/kubelet/pods/762f824e-4099-41b1-ab8a-e20b9773b8a9/volumes" Jan 07 04:03:41 crc kubenswrapper[4980]: I0107 04:03:41.378869 4980 scope.go:117] "RemoveContainer" containerID="b564609c3ab74803d93f65393773b9eea0fd3ecbe9ce176c569e62102baf26bb" Jan 07 04:03:41 crc kubenswrapper[4980]: I0107 04:03:41.443812 4980 scope.go:117] "RemoveContainer" containerID="4dbe64f9ca82a1e834cecba63a09d7a009e3b4cb22f25c38063ba33000e34dde" Jan 07 04:03:46 crc kubenswrapper[4980]: I0107 04:03:46.063395 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-z444m"] Jan 07 04:03:46 crc kubenswrapper[4980]: I0107 04:03:46.081052 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-z444m"] Jan 07 04:03:47 crc kubenswrapper[4980]: I0107 04:03:47.756326 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966ffc7e-1827-4cea-b4f1-f820b4e41986" path="/var/lib/kubelet/pods/966ffc7e-1827-4cea-b4f1-f820b4e41986/volumes" Jan 07 04:03:56 crc kubenswrapper[4980]: I0107 04:03:56.149634 4980 generic.go:334] "Generic (PLEG): container finished" podID="553631f5-8b26-4a24-bc27-cdbf1ad869db" containerID="56ec6e24a34f9daf36638ebe0365a19a56916a11782aa9ffd56a8325a1513214" exitCode=0 Jan 07 04:03:56 crc kubenswrapper[4980]: I0107 04:03:56.150520 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" event={"ID":"553631f5-8b26-4a24-bc27-cdbf1ad869db","Type":"ContainerDied","Data":"56ec6e24a34f9daf36638ebe0365a19a56916a11782aa9ffd56a8325a1513214"} Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.602976 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.669022 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh97w\" (UniqueName: \"kubernetes.io/projected/553631f5-8b26-4a24-bc27-cdbf1ad869db-kube-api-access-nh97w\") pod \"553631f5-8b26-4a24-bc27-cdbf1ad869db\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.669178 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-ssh-key-openstack-edpm-ipam\") pod \"553631f5-8b26-4a24-bc27-cdbf1ad869db\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.669216 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-inventory\") pod \"553631f5-8b26-4a24-bc27-cdbf1ad869db\" (UID: \"553631f5-8b26-4a24-bc27-cdbf1ad869db\") " Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.674471 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553631f5-8b26-4a24-bc27-cdbf1ad869db-kube-api-access-nh97w" (OuterVolumeSpecName: "kube-api-access-nh97w") pod "553631f5-8b26-4a24-bc27-cdbf1ad869db" (UID: "553631f5-8b26-4a24-bc27-cdbf1ad869db"). InnerVolumeSpecName "kube-api-access-nh97w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.698772 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-inventory" (OuterVolumeSpecName: "inventory") pod "553631f5-8b26-4a24-bc27-cdbf1ad869db" (UID: "553631f5-8b26-4a24-bc27-cdbf1ad869db"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.706443 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "553631f5-8b26-4a24-bc27-cdbf1ad869db" (UID: "553631f5-8b26-4a24-bc27-cdbf1ad869db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.771223 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh97w\" (UniqueName: \"kubernetes.io/projected/553631f5-8b26-4a24-bc27-cdbf1ad869db-kube-api-access-nh97w\") on node \"crc\" DevicePath \"\"" Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.771262 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:03:57 crc kubenswrapper[4980]: I0107 04:03:57.771275 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553631f5-8b26-4a24-bc27-cdbf1ad869db-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.174430 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" event={"ID":"553631f5-8b26-4a24-bc27-cdbf1ad869db","Type":"ContainerDied","Data":"9ae9cf5d679089be61d00d90b0b666f863a67e1fac7d2841da95ea403adc898d"} Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.174825 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ae9cf5d679089be61d00d90b0b666f863a67e1fac7d2841da95ea403adc898d" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 
04:03:58.174601 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.277032 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q252v"] Jan 07 04:03:58 crc kubenswrapper[4980]: E0107 04:03:58.277460 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553631f5-8b26-4a24-bc27-cdbf1ad869db" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.277482 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="553631f5-8b26-4a24-bc27-cdbf1ad869db" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.277705 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="553631f5-8b26-4a24-bc27-cdbf1ad869db" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.278266 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.282924 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.282969 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.282988 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.282977 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.297779 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q252v"] Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.381441 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.381502 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.381729 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lkhhk\" (UniqueName: \"kubernetes.io/projected/657c1546-50a5-49f6-9db2-a85ade05e059-kube-api-access-lkhhk\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.483679 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkhhk\" (UniqueName: \"kubernetes.io/projected/657c1546-50a5-49f6-9db2-a85ade05e059-kube-api-access-lkhhk\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.483916 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.483963 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.489857 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 
04:03:58.490391 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.500618 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkhhk\" (UniqueName: \"kubernetes.io/projected/657c1546-50a5-49f6-9db2-a85ade05e059-kube-api-access-lkhhk\") pod \"ssh-known-hosts-edpm-deployment-q252v\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:58 crc kubenswrapper[4980]: I0107 04:03:58.603320 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:03:59 crc kubenswrapper[4980]: I0107 04:03:59.242315 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q252v"] Jan 07 04:03:59 crc kubenswrapper[4980]: W0107 04:03:59.260907 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod657c1546_50a5_49f6_9db2_a85ade05e059.slice/crio-678a2b28f06aa9f00b19bf151700c32c1f6bfe358430919ca27a6a81bc8a1191 WatchSource:0}: Error finding container 678a2b28f06aa9f00b19bf151700c32c1f6bfe358430919ca27a6a81bc8a1191: Status 404 returned error can't find the container with id 678a2b28f06aa9f00b19bf151700c32c1f6bfe358430919ca27a6a81bc8a1191 Jan 07 04:03:59 crc kubenswrapper[4980]: I0107 04:03:59.264648 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 04:04:00 crc kubenswrapper[4980]: I0107 04:04:00.231672 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" 
event={"ID":"657c1546-50a5-49f6-9db2-a85ade05e059","Type":"ContainerStarted","Data":"678a2b28f06aa9f00b19bf151700c32c1f6bfe358430919ca27a6a81bc8a1191"} Jan 07 04:04:01 crc kubenswrapper[4980]: I0107 04:04:01.242227 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" event={"ID":"657c1546-50a5-49f6-9db2-a85ade05e059","Type":"ContainerStarted","Data":"c3f5dd58f50d60da50853a35a15d93de6c029944e967cdad791f23c4270723ff"} Jan 07 04:04:01 crc kubenswrapper[4980]: I0107 04:04:01.266054 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" podStartSLOduration=2.469720062 podStartE2EDuration="3.266027972s" podCreationTimestamp="2026-01-07 04:03:58 +0000 UTC" firstStartedPulling="2026-01-07 04:03:59.264214441 +0000 UTC m=+1885.829909186" lastFinishedPulling="2026-01-07 04:04:00.060522351 +0000 UTC m=+1886.626217096" observedRunningTime="2026-01-07 04:04:01.261756404 +0000 UTC m=+1887.827451139" watchObservedRunningTime="2026-01-07 04:04:01.266027972 +0000 UTC m=+1887.831722747" Jan 07 04:04:08 crc kubenswrapper[4980]: I0107 04:04:08.315672 4980 generic.go:334] "Generic (PLEG): container finished" podID="657c1546-50a5-49f6-9db2-a85ade05e059" containerID="c3f5dd58f50d60da50853a35a15d93de6c029944e967cdad791f23c4270723ff" exitCode=0 Jan 07 04:04:08 crc kubenswrapper[4980]: I0107 04:04:08.315821 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" event={"ID":"657c1546-50a5-49f6-9db2-a85ade05e059","Type":"ContainerDied","Data":"c3f5dd58f50d60da50853a35a15d93de6c029944e967cdad791f23c4270723ff"} Jan 07 04:04:09 crc kubenswrapper[4980]: I0107 04:04:09.806144 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:04:09 crc kubenswrapper[4980]: I0107 04:04:09.912256 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-ssh-key-openstack-edpm-ipam\") pod \"657c1546-50a5-49f6-9db2-a85ade05e059\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " Jan 07 04:04:09 crc kubenswrapper[4980]: I0107 04:04:09.912517 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkhhk\" (UniqueName: \"kubernetes.io/projected/657c1546-50a5-49f6-9db2-a85ade05e059-kube-api-access-lkhhk\") pod \"657c1546-50a5-49f6-9db2-a85ade05e059\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " Jan 07 04:04:09 crc kubenswrapper[4980]: I0107 04:04:09.912584 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-inventory-0\") pod \"657c1546-50a5-49f6-9db2-a85ade05e059\" (UID: \"657c1546-50a5-49f6-9db2-a85ade05e059\") " Jan 07 04:04:09 crc kubenswrapper[4980]: I0107 04:04:09.918629 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657c1546-50a5-49f6-9db2-a85ade05e059-kube-api-access-lkhhk" (OuterVolumeSpecName: "kube-api-access-lkhhk") pod "657c1546-50a5-49f6-9db2-a85ade05e059" (UID: "657c1546-50a5-49f6-9db2-a85ade05e059"). InnerVolumeSpecName "kube-api-access-lkhhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:04:09 crc kubenswrapper[4980]: I0107 04:04:09.944396 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "657c1546-50a5-49f6-9db2-a85ade05e059" (UID: "657c1546-50a5-49f6-9db2-a85ade05e059"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:04:09 crc kubenswrapper[4980]: I0107 04:04:09.947078 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "657c1546-50a5-49f6-9db2-a85ade05e059" (UID: "657c1546-50a5-49f6-9db2-a85ade05e059"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.015097 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkhhk\" (UniqueName: \"kubernetes.io/projected/657c1546-50a5-49f6-9db2-a85ade05e059-kube-api-access-lkhhk\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.015139 4980 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.015154 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657c1546-50a5-49f6-9db2-a85ade05e059-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.345660 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" 
event={"ID":"657c1546-50a5-49f6-9db2-a85ade05e059","Type":"ContainerDied","Data":"678a2b28f06aa9f00b19bf151700c32c1f6bfe358430919ca27a6a81bc8a1191"} Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.345919 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q252v" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.345942 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="678a2b28f06aa9f00b19bf151700c32c1f6bfe358430919ca27a6a81bc8a1191" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.512626 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787"] Jan 07 04:04:10 crc kubenswrapper[4980]: E0107 04:04:10.513104 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657c1546-50a5-49f6-9db2-a85ade05e059" containerName="ssh-known-hosts-edpm-deployment" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.513126 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="657c1546-50a5-49f6-9db2-a85ade05e059" containerName="ssh-known-hosts-edpm-deployment" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.513395 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="657c1546-50a5-49f6-9db2-a85ade05e059" containerName="ssh-known-hosts-edpm-deployment" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.514139 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.516632 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.516992 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.519205 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.519546 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.535616 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787"] Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.626046 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntjh9\" (UniqueName: \"kubernetes.io/projected/841894d0-7f26-4642-ac09-1395082e288e-kube-api-access-ntjh9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.626187 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.626283 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.729170 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.729446 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntjh9\" (UniqueName: \"kubernetes.io/projected/841894d0-7f26-4642-ac09-1395082e288e-kube-api-access-ntjh9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.729547 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.735617 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: 
\"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.737542 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.764792 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntjh9\" (UniqueName: \"kubernetes.io/projected/841894d0-7f26-4642-ac09-1395082e288e-kube-api-access-ntjh9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tv787\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:10 crc kubenswrapper[4980]: I0107 04:04:10.847969 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:11 crc kubenswrapper[4980]: I0107 04:04:11.510202 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787"] Jan 07 04:04:12 crc kubenswrapper[4980]: I0107 04:04:12.364685 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" event={"ID":"841894d0-7f26-4642-ac09-1395082e288e","Type":"ContainerStarted","Data":"8ac4820899754b0fa58e69dbc5ec346a10e245ade5fadd67d0c8dcb9d70aabf6"} Jan 07 04:04:13 crc kubenswrapper[4980]: I0107 04:04:13.377451 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" event={"ID":"841894d0-7f26-4642-ac09-1395082e288e","Type":"ContainerStarted","Data":"ab2c18503fb25c921d35aa3cee7c5bd2834990352b917dda89b58e7b0fa8ebb4"} Jan 07 04:04:13 crc kubenswrapper[4980]: I0107 04:04:13.397482 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" podStartSLOduration=2.901438515 podStartE2EDuration="3.397460083s" podCreationTimestamp="2026-01-07 04:04:10 +0000 UTC" firstStartedPulling="2026-01-07 04:04:11.520290478 +0000 UTC m=+1898.085985223" lastFinishedPulling="2026-01-07 04:04:12.016312046 +0000 UTC m=+1898.582006791" observedRunningTime="2026-01-07 04:04:13.395447462 +0000 UTC m=+1899.961142227" watchObservedRunningTime="2026-01-07 04:04:13.397460083 +0000 UTC m=+1899.963154828" Jan 07 04:04:22 crc kubenswrapper[4980]: I0107 04:04:22.475968 4980 generic.go:334] "Generic (PLEG): container finished" podID="841894d0-7f26-4642-ac09-1395082e288e" containerID="ab2c18503fb25c921d35aa3cee7c5bd2834990352b917dda89b58e7b0fa8ebb4" exitCode=0 Jan 07 04:04:22 crc kubenswrapper[4980]: I0107 04:04:22.476111 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" event={"ID":"841894d0-7f26-4642-ac09-1395082e288e","Type":"ContainerDied","Data":"ab2c18503fb25c921d35aa3cee7c5bd2834990352b917dda89b58e7b0fa8ebb4"} Jan 07 04:04:23 crc kubenswrapper[4980]: I0107 04:04:23.932986 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:23 crc kubenswrapper[4980]: I0107 04:04:23.980027 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-inventory\") pod \"841894d0-7f26-4642-ac09-1395082e288e\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " Jan 07 04:04:23 crc kubenswrapper[4980]: I0107 04:04:23.980221 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-ssh-key-openstack-edpm-ipam\") pod \"841894d0-7f26-4642-ac09-1395082e288e\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " Jan 07 04:04:23 crc kubenswrapper[4980]: I0107 04:04:23.980310 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntjh9\" (UniqueName: \"kubernetes.io/projected/841894d0-7f26-4642-ac09-1395082e288e-kube-api-access-ntjh9\") pod \"841894d0-7f26-4642-ac09-1395082e288e\" (UID: \"841894d0-7f26-4642-ac09-1395082e288e\") " Jan 07 04:04:23 crc kubenswrapper[4980]: I0107 04:04:23.986880 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/841894d0-7f26-4642-ac09-1395082e288e-kube-api-access-ntjh9" (OuterVolumeSpecName: "kube-api-access-ntjh9") pod "841894d0-7f26-4642-ac09-1395082e288e" (UID: "841894d0-7f26-4642-ac09-1395082e288e"). InnerVolumeSpecName "kube-api-access-ntjh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.031859 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-inventory" (OuterVolumeSpecName: "inventory") pod "841894d0-7f26-4642-ac09-1395082e288e" (UID: "841894d0-7f26-4642-ac09-1395082e288e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.034786 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "841894d0-7f26-4642-ac09-1395082e288e" (UID: "841894d0-7f26-4642-ac09-1395082e288e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.083840 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.084193 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntjh9\" (UniqueName: \"kubernetes.io/projected/841894d0-7f26-4642-ac09-1395082e288e-kube-api-access-ntjh9\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.084214 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/841894d0-7f26-4642-ac09-1395082e288e-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.495886 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" 
event={"ID":"841894d0-7f26-4642-ac09-1395082e288e","Type":"ContainerDied","Data":"8ac4820899754b0fa58e69dbc5ec346a10e245ade5fadd67d0c8dcb9d70aabf6"} Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.495949 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac4820899754b0fa58e69dbc5ec346a10e245ade5fadd67d0c8dcb9d70aabf6" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.496063 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tv787" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.629770 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv"] Jan 07 04:04:24 crc kubenswrapper[4980]: E0107 04:04:24.630120 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841894d0-7f26-4642-ac09-1395082e288e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.630131 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="841894d0-7f26-4642-ac09-1395082e288e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.630303 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="841894d0-7f26-4642-ac09-1395082e288e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.630858 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.633204 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.636033 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.636219 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.640345 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.641805 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv"] Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.695955 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.696223 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.696266 4980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xg8p\" (UniqueName: \"kubernetes.io/projected/5319b8be-e13c-4d5f-92d5-41d82748a080-kube-api-access-7xg8p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.797780 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.797836 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xg8p\" (UniqueName: \"kubernetes.io/projected/5319b8be-e13c-4d5f-92d5-41d82748a080-kube-api-access-7xg8p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.797994 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.804108 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.805509 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.823703 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xg8p\" (UniqueName: \"kubernetes.io/projected/5319b8be-e13c-4d5f-92d5-41d82748a080-kube-api-access-7xg8p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:24 crc kubenswrapper[4980]: I0107 04:04:24.950226 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:25 crc kubenswrapper[4980]: I0107 04:04:25.578626 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv"] Jan 07 04:04:25 crc kubenswrapper[4980]: W0107 04:04:25.584655 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5319b8be_e13c_4d5f_92d5_41d82748a080.slice/crio-9a3dadfc53ff96fd0ca2730798e223bd2a18c510243b0021b206fe1b9f41d624 WatchSource:0}: Error finding container 9a3dadfc53ff96fd0ca2730798e223bd2a18c510243b0021b206fe1b9f41d624: Status 404 returned error can't find the container with id 9a3dadfc53ff96fd0ca2730798e223bd2a18c510243b0021b206fe1b9f41d624 Jan 07 04:04:26 crc kubenswrapper[4980]: I0107 04:04:26.516056 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" event={"ID":"5319b8be-e13c-4d5f-92d5-41d82748a080","Type":"ContainerStarted","Data":"8ba71451feebf6bc1f9b44b3a6759569ee82a5c39e5a919361f07a39a2b7e954"} Jan 07 04:04:26 crc kubenswrapper[4980]: I0107 04:04:26.516587 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" event={"ID":"5319b8be-e13c-4d5f-92d5-41d82748a080","Type":"ContainerStarted","Data":"9a3dadfc53ff96fd0ca2730798e223bd2a18c510243b0021b206fe1b9f41d624"} Jan 07 04:04:26 crc kubenswrapper[4980]: I0107 04:04:26.544183 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" podStartSLOduration=2.019650237 podStartE2EDuration="2.544160604s" podCreationTimestamp="2026-01-07 04:04:24 +0000 UTC" firstStartedPulling="2026-01-07 04:04:25.589465721 +0000 UTC m=+1912.155160496" lastFinishedPulling="2026-01-07 04:04:26.113976098 +0000 UTC m=+1912.679670863" 
observedRunningTime="2026-01-07 04:04:26.537633869 +0000 UTC m=+1913.103328644" watchObservedRunningTime="2026-01-07 04:04:26.544160604 +0000 UTC m=+1913.109855349" Jan 07 04:04:37 crc kubenswrapper[4980]: I0107 04:04:37.643156 4980 generic.go:334] "Generic (PLEG): container finished" podID="5319b8be-e13c-4d5f-92d5-41d82748a080" containerID="8ba71451feebf6bc1f9b44b3a6759569ee82a5c39e5a919361f07a39a2b7e954" exitCode=0 Jan 07 04:04:37 crc kubenswrapper[4980]: I0107 04:04:37.643289 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" event={"ID":"5319b8be-e13c-4d5f-92d5-41d82748a080","Type":"ContainerDied","Data":"8ba71451feebf6bc1f9b44b3a6759569ee82a5c39e5a919361f07a39a2b7e954"} Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.168061 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.355772 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-ssh-key-openstack-edpm-ipam\") pod \"5319b8be-e13c-4d5f-92d5-41d82748a080\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.356017 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-inventory\") pod \"5319b8be-e13c-4d5f-92d5-41d82748a080\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.356141 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xg8p\" (UniqueName: \"kubernetes.io/projected/5319b8be-e13c-4d5f-92d5-41d82748a080-kube-api-access-7xg8p\") pod 
\"5319b8be-e13c-4d5f-92d5-41d82748a080\" (UID: \"5319b8be-e13c-4d5f-92d5-41d82748a080\") " Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.369681 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5319b8be-e13c-4d5f-92d5-41d82748a080-kube-api-access-7xg8p" (OuterVolumeSpecName: "kube-api-access-7xg8p") pod "5319b8be-e13c-4d5f-92d5-41d82748a080" (UID: "5319b8be-e13c-4d5f-92d5-41d82748a080"). InnerVolumeSpecName "kube-api-access-7xg8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.383052 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5319b8be-e13c-4d5f-92d5-41d82748a080" (UID: "5319b8be-e13c-4d5f-92d5-41d82748a080"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.401523 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-inventory" (OuterVolumeSpecName: "inventory") pod "5319b8be-e13c-4d5f-92d5-41d82748a080" (UID: "5319b8be-e13c-4d5f-92d5-41d82748a080"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.459224 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.459279 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xg8p\" (UniqueName: \"kubernetes.io/projected/5319b8be-e13c-4d5f-92d5-41d82748a080-kube-api-access-7xg8p\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.459302 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5319b8be-e13c-4d5f-92d5-41d82748a080-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.669770 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" event={"ID":"5319b8be-e13c-4d5f-92d5-41d82748a080","Type":"ContainerDied","Data":"9a3dadfc53ff96fd0ca2730798e223bd2a18c510243b0021b206fe1b9f41d624"} Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.669829 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a3dadfc53ff96fd0ca2730798e223bd2a18c510243b0021b206fe1b9f41d624" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.669914 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.812580 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b"] Jan 07 04:04:39 crc kubenswrapper[4980]: E0107 04:04:39.813516 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5319b8be-e13c-4d5f-92d5-41d82748a080" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.813653 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5319b8be-e13c-4d5f-92d5-41d82748a080" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.813964 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5319b8be-e13c-4d5f-92d5-41d82748a080" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.814834 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.819434 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.819639 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.819663 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.819830 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.824920 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.825471 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.825780 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.825922 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.831391 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b"] Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.980322 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.980388 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nrt9\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-kube-api-access-7nrt9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.980692 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.980763 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.980938 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.980996 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981078 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981149 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981195 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981248 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981298 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981332 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981384 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:39 crc kubenswrapper[4980]: I0107 04:04:39.981424 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.084032 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.084424 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.084696 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.084886 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.085078 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.085281 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.085446 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" 
(UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.086456 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.086745 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nrt9\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-kube-api-access-7nrt9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.088093 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.089099 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.090445 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.090670 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.091148 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.091409 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.091621 4980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.092582 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.095775 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.095943 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.097083 4980 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.097627 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.098213 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.099484 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.102170 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.102999 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.107076 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nrt9\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-kube-api-access-7nrt9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.108340 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.115622 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rks2b\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.195375 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:04:40 crc kubenswrapper[4980]: I0107 04:04:40.793541 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b"] Jan 07 04:04:41 crc kubenswrapper[4980]: I0107 04:04:41.571753 4980 scope.go:117] "RemoveContainer" containerID="477d0f2fec7a8903c93d035b7439e0c5030debaa6613216a8ab7b07e1564298e" Jan 07 04:04:41 crc kubenswrapper[4980]: I0107 04:04:41.689021 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" event={"ID":"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4","Type":"ContainerStarted","Data":"3ee9ba77f0b2a29a4bd0aa5e4e8f01de8aa023b3e3cef540bc0a834920b782b9"} Jan 07 04:04:42 crc kubenswrapper[4980]: I0107 04:04:42.701658 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" event={"ID":"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4","Type":"ContainerStarted","Data":"00fc4a2e76b73a5225a427ec734ed8326dba50c65fb2913647aa57597743149d"} Jan 07 04:04:42 crc kubenswrapper[4980]: I0107 04:04:42.753225 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" podStartSLOduration=3.147720652 podStartE2EDuration="3.753197492s" podCreationTimestamp="2026-01-07 04:04:39 +0000 UTC" firstStartedPulling="2026-01-07 04:04:40.804595668 +0000 UTC m=+1927.370290413" lastFinishedPulling="2026-01-07 04:04:41.410072508 +0000 UTC m=+1927.975767253" observedRunningTime="2026-01-07 04:04:42.740302517 +0000 UTC m=+1929.305997292" watchObservedRunningTime="2026-01-07 04:04:42.753197492 +0000 UTC m=+1929.318892267" Jan 07 
04:05:06 crc kubenswrapper[4980]: I0107 04:05:06.543155 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:05:06 crc kubenswrapper[4980]: I0107 04:05:06.543904 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:05:24 crc kubenswrapper[4980]: I0107 04:05:24.180429 4980 generic.go:334] "Generic (PLEG): container finished" podID="010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" containerID="00fc4a2e76b73a5225a427ec734ed8326dba50c65fb2913647aa57597743149d" exitCode=0 Jan 07 04:05:24 crc kubenswrapper[4980]: I0107 04:05:24.180579 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" event={"ID":"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4","Type":"ContainerDied","Data":"00fc4a2e76b73a5225a427ec734ed8326dba50c65fb2913647aa57597743149d"} Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.757695 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.812005 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ovn-combined-ca-bundle\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.812057 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nrt9\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-kube-api-access-7nrt9\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.812121 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-bootstrap-combined-ca-bundle\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.812146 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-libvirt-combined-ca-bundle\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.812193 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-nova-combined-ca-bundle\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: 
I0107 04:05:25.812227 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-repo-setup-combined-ca-bundle\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.812274 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-neutron-metadata-combined-ca-bundle\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.812325 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-telemetry-combined-ca-bundle\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.813217 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.813255 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.813289 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ssh-key-openstack-edpm-ipam\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.813318 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.813389 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.813521 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.822971 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.823777 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.824732 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.824786 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.824923 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-kube-api-access-7nrt9" (OuterVolumeSpecName: "kube-api-access-7nrt9") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "kube-api-access-7nrt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.825042 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.825139 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.825203 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.826757 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.828030 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.836033 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.837092 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: E0107 04:05:25.863189 4980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory podName:010cdc43-6f59-4a62-b7ae-b98c5cdec4e4 nodeName:}" failed. No retries permitted until 2026-01-07 04:05:26.363150944 +0000 UTC m=+1972.928845729 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4") : error deleting /var/lib/kubelet/pods/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4/volume-subpaths: remove /var/lib/kubelet/pods/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4/volume-subpaths: no such file or directory Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.865207 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916663 4980 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916713 4980 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916737 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nrt9\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-kube-api-access-7nrt9\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916756 4980 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916776 4980 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916795 4980 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916813 4980 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916855 4980 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916877 4980 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916897 4980 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916918 4980 
reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916938 4980 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:25 crc kubenswrapper[4980]: I0107 04:05:25.916959 4980 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.207870 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" event={"ID":"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4","Type":"ContainerDied","Data":"3ee9ba77f0b2a29a4bd0aa5e4e8f01de8aa023b3e3cef540bc0a834920b782b9"} Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.208140 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ee9ba77f0b2a29a4bd0aa5e4e8f01de8aa023b3e3cef540bc0a834920b782b9" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.208138 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rks2b" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.427420 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory\") pod \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\" (UID: \"010cdc43-6f59-4a62-b7ae-b98c5cdec4e4\") " Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.433845 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory" (OuterVolumeSpecName: "inventory") pod "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" (UID: "010cdc43-6f59-4a62-b7ae-b98c5cdec4e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.529672 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/010cdc43-6f59-4a62-b7ae-b98c5cdec4e4-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.539397 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc"] Jan 07 04:05:26 crc kubenswrapper[4980]: E0107 04:05:26.540053 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.540084 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.540529 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="010cdc43-6f59-4a62-b7ae-b98c5cdec4e4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 
07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.541824 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.550327 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.550738 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.550980 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.551176 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.551200 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.573301 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc"] Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.631999 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccw6\" (UniqueName: \"kubernetes.io/projected/b6c5efe0-317c-4de6-9d52-c8790db72ae6-kube-api-access-2ccw6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.632135 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.632333 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.632445 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.632640 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.734104 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.734194 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.734313 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.734452 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ccw6\" (UniqueName: \"kubernetes.io/projected/b6c5efe0-317c-4de6-9d52-c8790db72ae6-kube-api-access-2ccw6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.734523 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.741742 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.742464 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.742526 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.747836 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 04:05:26.770862 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ccw6\" (UniqueName: \"kubernetes.io/projected/b6c5efe0-317c-4de6-9d52-c8790db72ae6-kube-api-access-2ccw6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w9lbc\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:26 crc kubenswrapper[4980]: I0107 
04:05:26.906785 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:05:27 crc kubenswrapper[4980]: I0107 04:05:27.286379 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc"] Jan 07 04:05:28 crc kubenswrapper[4980]: I0107 04:05:28.236501 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" event={"ID":"b6c5efe0-317c-4de6-9d52-c8790db72ae6","Type":"ContainerStarted","Data":"619bf6a3e31336afebf0921e814bf3dde46b38f2ee3ad1ac0e2be3453b88e9ab"} Jan 07 04:05:28 crc kubenswrapper[4980]: I0107 04:05:28.878137 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fvmhk"] Jan 07 04:05:28 crc kubenswrapper[4980]: I0107 04:05:28.881251 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:28 crc kubenswrapper[4980]: I0107 04:05:28.891883 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvmhk"] Jan 07 04:05:28 crc kubenswrapper[4980]: I0107 04:05:28.990341 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-utilities\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:28 crc kubenswrapper[4980]: I0107 04:05:28.991252 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-catalog-content\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " 
pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:28 crc kubenswrapper[4980]: I0107 04:05:28.991336 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l76s8\" (UniqueName: \"kubernetes.io/projected/2aa0b3bd-adfc-4234-8de2-b68030471275-kube-api-access-l76s8\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.093094 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-utilities\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.093176 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-catalog-content\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.093229 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l76s8\" (UniqueName: \"kubernetes.io/projected/2aa0b3bd-adfc-4234-8de2-b68030471275-kube-api-access-l76s8\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.093825 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-catalog-content\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " 
pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.094044 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-utilities\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.125594 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l76s8\" (UniqueName: \"kubernetes.io/projected/2aa0b3bd-adfc-4234-8de2-b68030471275-kube-api-access-l76s8\") pod \"redhat-marketplace-fvmhk\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.218134 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.246944 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" event={"ID":"b6c5efe0-317c-4de6-9d52-c8790db72ae6","Type":"ContainerStarted","Data":"1e88a36ee601dcd50c1cdbb884f80c7a07b802d09f342437d9f7c88fcfa0dcf5"} Jan 07 04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.275936 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" podStartSLOduration=2.579997134 podStartE2EDuration="3.275892s" podCreationTimestamp="2026-01-07 04:05:26 +0000 UTC" firstStartedPulling="2026-01-07 04:05:27.286398517 +0000 UTC m=+1973.852093252" lastFinishedPulling="2026-01-07 04:05:27.982293383 +0000 UTC m=+1974.547988118" observedRunningTime="2026-01-07 04:05:29.267201531 +0000 UTC m=+1975.832896286" watchObservedRunningTime="2026-01-07 04:05:29.275892 +0000 UTC m=+1975.841586815" Jan 07 
04:05:29 crc kubenswrapper[4980]: I0107 04:05:29.759670 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvmhk"] Jan 07 04:05:29 crc kubenswrapper[4980]: W0107 04:05:29.763605 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa0b3bd_adfc_4234_8de2_b68030471275.slice/crio-0e071583edfa7d72c4b3a5933e9bf2b9e2c9a3d7fc0331d6b2928d8c751d0fce WatchSource:0}: Error finding container 0e071583edfa7d72c4b3a5933e9bf2b9e2c9a3d7fc0331d6b2928d8c751d0fce: Status 404 returned error can't find the container with id 0e071583edfa7d72c4b3a5933e9bf2b9e2c9a3d7fc0331d6b2928d8c751d0fce Jan 07 04:05:30 crc kubenswrapper[4980]: I0107 04:05:30.260675 4980 generic.go:334] "Generic (PLEG): container finished" podID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerID="291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86" exitCode=0 Jan 07 04:05:30 crc kubenswrapper[4980]: I0107 04:05:30.260806 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvmhk" event={"ID":"2aa0b3bd-adfc-4234-8de2-b68030471275","Type":"ContainerDied","Data":"291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86"} Jan 07 04:05:30 crc kubenswrapper[4980]: I0107 04:05:30.261170 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvmhk" event={"ID":"2aa0b3bd-adfc-4234-8de2-b68030471275","Type":"ContainerStarted","Data":"0e071583edfa7d72c4b3a5933e9bf2b9e2c9a3d7fc0331d6b2928d8c751d0fce"} Jan 07 04:05:32 crc kubenswrapper[4980]: I0107 04:05:32.291932 4980 generic.go:334] "Generic (PLEG): container finished" podID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerID="92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e" exitCode=0 Jan 07 04:05:32 crc kubenswrapper[4980]: I0107 04:05:32.292102 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-fvmhk" event={"ID":"2aa0b3bd-adfc-4234-8de2-b68030471275","Type":"ContainerDied","Data":"92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e"} Jan 07 04:05:33 crc kubenswrapper[4980]: I0107 04:05:33.305589 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvmhk" event={"ID":"2aa0b3bd-adfc-4234-8de2-b68030471275","Type":"ContainerStarted","Data":"25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c"} Jan 07 04:05:33 crc kubenswrapper[4980]: I0107 04:05:33.330964 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fvmhk" podStartSLOduration=2.779320502 podStartE2EDuration="5.330937756s" podCreationTimestamp="2026-01-07 04:05:28 +0000 UTC" firstStartedPulling="2026-01-07 04:05:30.262445043 +0000 UTC m=+1976.828139788" lastFinishedPulling="2026-01-07 04:05:32.814062277 +0000 UTC m=+1979.379757042" observedRunningTime="2026-01-07 04:05:33.327278108 +0000 UTC m=+1979.892972873" watchObservedRunningTime="2026-01-07 04:05:33.330937756 +0000 UTC m=+1979.896632531" Jan 07 04:05:36 crc kubenswrapper[4980]: I0107 04:05:36.543645 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:05:36 crc kubenswrapper[4980]: I0107 04:05:36.544335 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:05:39 crc kubenswrapper[4980]: I0107 04:05:39.218307 4980 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:39 crc kubenswrapper[4980]: I0107 04:05:39.219341 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:39 crc kubenswrapper[4980]: I0107 04:05:39.299422 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:39 crc kubenswrapper[4980]: I0107 04:05:39.451938 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:39 crc kubenswrapper[4980]: I0107 04:05:39.556238 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvmhk"] Jan 07 04:05:41 crc kubenswrapper[4980]: I0107 04:05:41.393537 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fvmhk" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="registry-server" containerID="cri-o://25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c" gracePeriod=2 Jan 07 04:05:41 crc kubenswrapper[4980]: I0107 04:05:41.975179 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.101987 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-utilities\") pod \"2aa0b3bd-adfc-4234-8de2-b68030471275\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.102337 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l76s8\" (UniqueName: \"kubernetes.io/projected/2aa0b3bd-adfc-4234-8de2-b68030471275-kube-api-access-l76s8\") pod \"2aa0b3bd-adfc-4234-8de2-b68030471275\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.102404 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-catalog-content\") pod \"2aa0b3bd-adfc-4234-8de2-b68030471275\" (UID: \"2aa0b3bd-adfc-4234-8de2-b68030471275\") " Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.103727 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-utilities" (OuterVolumeSpecName: "utilities") pod "2aa0b3bd-adfc-4234-8de2-b68030471275" (UID: "2aa0b3bd-adfc-4234-8de2-b68030471275"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.108059 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa0b3bd-adfc-4234-8de2-b68030471275-kube-api-access-l76s8" (OuterVolumeSpecName: "kube-api-access-l76s8") pod "2aa0b3bd-adfc-4234-8de2-b68030471275" (UID: "2aa0b3bd-adfc-4234-8de2-b68030471275"). InnerVolumeSpecName "kube-api-access-l76s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.204991 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l76s8\" (UniqueName: \"kubernetes.io/projected/2aa0b3bd-adfc-4234-8de2-b68030471275-kube-api-access-l76s8\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.205352 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.407888 4980 generic.go:334] "Generic (PLEG): container finished" podID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerID="25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c" exitCode=0 Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.407951 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvmhk" event={"ID":"2aa0b3bd-adfc-4234-8de2-b68030471275","Type":"ContainerDied","Data":"25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c"} Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.407988 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvmhk" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.408045 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvmhk" event={"ID":"2aa0b3bd-adfc-4234-8de2-b68030471275","Type":"ContainerDied","Data":"0e071583edfa7d72c4b3a5933e9bf2b9e2c9a3d7fc0331d6b2928d8c751d0fce"} Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.408084 4980 scope.go:117] "RemoveContainer" containerID="25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.430380 4980 scope.go:117] "RemoveContainer" containerID="92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.473285 4980 scope.go:117] "RemoveContainer" containerID="291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.510378 4980 scope.go:117] "RemoveContainer" containerID="25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c" Jan 07 04:05:42 crc kubenswrapper[4980]: E0107 04:05:42.511099 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c\": container with ID starting with 25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c not found: ID does not exist" containerID="25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.511188 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c"} err="failed to get container status \"25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c\": rpc error: code = NotFound desc = could not find container 
\"25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c\": container with ID starting with 25060a972b6056fdd45e39da66cdcaba94c7adc1b090cfae635a5071fd2c4d5c not found: ID does not exist" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.511252 4980 scope.go:117] "RemoveContainer" containerID="92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e" Jan 07 04:05:42 crc kubenswrapper[4980]: E0107 04:05:42.512008 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e\": container with ID starting with 92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e not found: ID does not exist" containerID="92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.512220 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e"} err="failed to get container status \"92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e\": rpc error: code = NotFound desc = could not find container \"92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e\": container with ID starting with 92a50928d5abf1ee1177124db40b601a70809cae5df128c556580169f095ce3e not found: ID does not exist" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.512338 4980 scope.go:117] "RemoveContainer" containerID="291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86" Jan 07 04:05:42 crc kubenswrapper[4980]: E0107 04:05:42.513714 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86\": container with ID starting with 291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86 not found: ID does not exist" 
containerID="291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.513811 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86"} err="failed to get container status \"291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86\": rpc error: code = NotFound desc = could not find container \"291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86\": container with ID starting with 291b6969a65bb5fc885c7574e6055a71abfa6ea133bc3f24f0c02ecaa37d6b86 not found: ID does not exist" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.575667 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aa0b3bd-adfc-4234-8de2-b68030471275" (UID: "2aa0b3bd-adfc-4234-8de2-b68030471275"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.614039 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa0b3bd-adfc-4234-8de2-b68030471275-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.764056 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvmhk"] Jan 07 04:05:42 crc kubenswrapper[4980]: I0107 04:05:42.780240 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvmhk"] Jan 07 04:05:43 crc kubenswrapper[4980]: I0107 04:05:43.766994 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" path="/var/lib/kubelet/pods/2aa0b3bd-adfc-4234-8de2-b68030471275/volumes" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.877876 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ftb8f"] Jan 07 04:05:49 crc kubenswrapper[4980]: E0107 04:05:49.879245 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="extract-utilities" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.879268 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="extract-utilities" Jan 07 04:05:49 crc kubenswrapper[4980]: E0107 04:05:49.879302 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="extract-content" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.879315 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="extract-content" Jan 07 04:05:49 crc kubenswrapper[4980]: E0107 04:05:49.879354 4980 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="registry-server" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.879367 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="registry-server" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.879715 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa0b3bd-adfc-4234-8de2-b68030471275" containerName="registry-server" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.882030 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.889957 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftb8f"] Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.999395 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-utilities\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.999521 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwr52\" (UniqueName: \"kubernetes.io/projected/7215d536-dbd7-4d24-a7d5-697123da3ca4-kube-api-access-pwr52\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:49 crc kubenswrapper[4980]: I0107 04:05:49.999616 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-catalog-content\") pod \"redhat-operators-ftb8f\" (UID: 
\"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.102398 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwr52\" (UniqueName: \"kubernetes.io/projected/7215d536-dbd7-4d24-a7d5-697123da3ca4-kube-api-access-pwr52\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.102640 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-catalog-content\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.103700 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-catalog-content\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.104381 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-utilities\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.105211 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-utilities\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " 
pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.129605 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwr52\" (UniqueName: \"kubernetes.io/projected/7215d536-dbd7-4d24-a7d5-697123da3ca4-kube-api-access-pwr52\") pod \"redhat-operators-ftb8f\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.223041 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:05:50 crc kubenswrapper[4980]: W0107 04:05:50.742518 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7215d536_dbd7_4d24_a7d5_697123da3ca4.slice/crio-8130d0d3aac6d8e8e56e8868580a852014711e4b5abcdfc4c1158983d96c3dda WatchSource:0}: Error finding container 8130d0d3aac6d8e8e56e8868580a852014711e4b5abcdfc4c1158983d96c3dda: Status 404 returned error can't find the container with id 8130d0d3aac6d8e8e56e8868580a852014711e4b5abcdfc4c1158983d96c3dda Jan 07 04:05:50 crc kubenswrapper[4980]: I0107 04:05:50.743522 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftb8f"] Jan 07 04:05:51 crc kubenswrapper[4980]: I0107 04:05:51.506650 4980 generic.go:334] "Generic (PLEG): container finished" podID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerID="aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03" exitCode=0 Jan 07 04:05:51 crc kubenswrapper[4980]: I0107 04:05:51.506791 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftb8f" event={"ID":"7215d536-dbd7-4d24-a7d5-697123da3ca4","Type":"ContainerDied","Data":"aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03"} Jan 07 04:05:51 crc kubenswrapper[4980]: I0107 04:05:51.507045 4980 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftb8f" event={"ID":"7215d536-dbd7-4d24-a7d5-697123da3ca4","Type":"ContainerStarted","Data":"8130d0d3aac6d8e8e56e8868580a852014711e4b5abcdfc4c1158983d96c3dda"} Jan 07 04:05:53 crc kubenswrapper[4980]: I0107 04:05:53.539806 4980 generic.go:334] "Generic (PLEG): container finished" podID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerID="7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11" exitCode=0 Jan 07 04:05:53 crc kubenswrapper[4980]: I0107 04:05:53.539906 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftb8f" event={"ID":"7215d536-dbd7-4d24-a7d5-697123da3ca4","Type":"ContainerDied","Data":"7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11"} Jan 07 04:05:54 crc kubenswrapper[4980]: I0107 04:05:54.583194 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftb8f" event={"ID":"7215d536-dbd7-4d24-a7d5-697123da3ca4","Type":"ContainerStarted","Data":"376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f"} Jan 07 04:05:54 crc kubenswrapper[4980]: I0107 04:05:54.615135 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ftb8f" podStartSLOduration=3.14930854 podStartE2EDuration="5.615114005s" podCreationTimestamp="2026-01-07 04:05:49 +0000 UTC" firstStartedPulling="2026-01-07 04:05:51.508918537 +0000 UTC m=+1998.074613302" lastFinishedPulling="2026-01-07 04:05:53.974723992 +0000 UTC m=+2000.540418767" observedRunningTime="2026-01-07 04:05:54.606218729 +0000 UTC m=+2001.171913464" watchObservedRunningTime="2026-01-07 04:05:54.615114005 +0000 UTC m=+2001.180808740" Jan 07 04:06:00 crc kubenswrapper[4980]: I0107 04:06:00.223335 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:06:00 crc 
kubenswrapper[4980]: I0107 04:06:00.223998 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:06:01 crc kubenswrapper[4980]: I0107 04:06:01.292715 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ftb8f" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="registry-server" probeResult="failure" output=< Jan 07 04:06:01 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 04:06:01 crc kubenswrapper[4980]: > Jan 07 04:06:06 crc kubenswrapper[4980]: I0107 04:06:06.543368 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:06:06 crc kubenswrapper[4980]: I0107 04:06:06.544234 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:06:06 crc kubenswrapper[4980]: I0107 04:06:06.544309 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 04:06:06 crc kubenswrapper[4980]: I0107 04:06:06.545323 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1ed879a83db37a20bc52acd9903f3aabf2339db3829b25c95e9a6e9ed19722c"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 04:06:06 crc 
kubenswrapper[4980]: I0107 04:06:06.545427 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://d1ed879a83db37a20bc52acd9903f3aabf2339db3829b25c95e9a6e9ed19722c" gracePeriod=600 Jan 07 04:06:07 crc kubenswrapper[4980]: I0107 04:06:07.740836 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="d1ed879a83db37a20bc52acd9903f3aabf2339db3829b25c95e9a6e9ed19722c" exitCode=0 Jan 07 04:06:07 crc kubenswrapper[4980]: I0107 04:06:07.750622 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"d1ed879a83db37a20bc52acd9903f3aabf2339db3829b25c95e9a6e9ed19722c"} Jan 07 04:06:07 crc kubenswrapper[4980]: I0107 04:06:07.750717 4980 scope.go:117] "RemoveContainer" containerID="890346cc64acc025756336c595763f8f87abf93347eca9d69e39350478cbc862" Jan 07 04:06:08 crc kubenswrapper[4980]: I0107 04:06:08.756330 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"} Jan 07 04:06:10 crc kubenswrapper[4980]: I0107 04:06:10.288006 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:06:10 crc kubenswrapper[4980]: I0107 04:06:10.379783 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:06:10 crc kubenswrapper[4980]: I0107 04:06:10.554727 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-ftb8f"] Jan 07 04:06:11 crc kubenswrapper[4980]: I0107 04:06:11.785317 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ftb8f" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="registry-server" containerID="cri-o://376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f" gracePeriod=2 Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.303071 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.406841 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-utilities\") pod \"7215d536-dbd7-4d24-a7d5-697123da3ca4\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.407061 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwr52\" (UniqueName: \"kubernetes.io/projected/7215d536-dbd7-4d24-a7d5-697123da3ca4-kube-api-access-pwr52\") pod \"7215d536-dbd7-4d24-a7d5-697123da3ca4\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.407108 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-catalog-content\") pod \"7215d536-dbd7-4d24-a7d5-697123da3ca4\" (UID: \"7215d536-dbd7-4d24-a7d5-697123da3ca4\") " Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.407873 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-utilities" (OuterVolumeSpecName: "utilities") pod "7215d536-dbd7-4d24-a7d5-697123da3ca4" (UID: 
"7215d536-dbd7-4d24-a7d5-697123da3ca4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.408035 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.412579 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7215d536-dbd7-4d24-a7d5-697123da3ca4-kube-api-access-pwr52" (OuterVolumeSpecName: "kube-api-access-pwr52") pod "7215d536-dbd7-4d24-a7d5-697123da3ca4" (UID: "7215d536-dbd7-4d24-a7d5-697123da3ca4"). InnerVolumeSpecName "kube-api-access-pwr52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.509537 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwr52\" (UniqueName: \"kubernetes.io/projected/7215d536-dbd7-4d24-a7d5-697123da3ca4-kube-api-access-pwr52\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.540578 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7215d536-dbd7-4d24-a7d5-697123da3ca4" (UID: "7215d536-dbd7-4d24-a7d5-697123da3ca4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.611689 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7215d536-dbd7-4d24-a7d5-697123da3ca4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.797426 4980 generic.go:334] "Generic (PLEG): container finished" podID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerID="376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f" exitCode=0 Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.797480 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftb8f" event={"ID":"7215d536-dbd7-4d24-a7d5-697123da3ca4","Type":"ContainerDied","Data":"376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f"} Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.797522 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftb8f" event={"ID":"7215d536-dbd7-4d24-a7d5-697123da3ca4","Type":"ContainerDied","Data":"8130d0d3aac6d8e8e56e8868580a852014711e4b5abcdfc4c1158983d96c3dda"} Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.797550 4980 scope.go:117] "RemoveContainer" containerID="376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.797543 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftb8f" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.839863 4980 scope.go:117] "RemoveContainer" containerID="7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.877050 4980 scope.go:117] "RemoveContainer" containerID="aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.884842 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ftb8f"] Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.895302 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ftb8f"] Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.922591 4980 scope.go:117] "RemoveContainer" containerID="376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f" Jan 07 04:06:12 crc kubenswrapper[4980]: E0107 04:06:12.923292 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f\": container with ID starting with 376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f not found: ID does not exist" containerID="376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.923337 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f"} err="failed to get container status \"376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f\": rpc error: code = NotFound desc = could not find container \"376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f\": container with ID starting with 376699393f749ced2ead29b9db8ee66716d0ec806f024c690f81673f1c847a9f not found: ID does 
not exist" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.923362 4980 scope.go:117] "RemoveContainer" containerID="7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11" Jan 07 04:06:12 crc kubenswrapper[4980]: E0107 04:06:12.923718 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11\": container with ID starting with 7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11 not found: ID does not exist" containerID="7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.923761 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11"} err="failed to get container status \"7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11\": rpc error: code = NotFound desc = could not find container \"7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11\": container with ID starting with 7e28d37580b131ec2c08e2b8ab309a644acbcba90417f4633fedb1e0411acf11 not found: ID does not exist" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.923786 4980 scope.go:117] "RemoveContainer" containerID="aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03" Jan 07 04:06:12 crc kubenswrapper[4980]: E0107 04:06:12.924063 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03\": container with ID starting with aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03 not found: ID does not exist" containerID="aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03" Jan 07 04:06:12 crc kubenswrapper[4980]: I0107 04:06:12.924087 4980 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03"} err="failed to get container status \"aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03\": rpc error: code = NotFound desc = could not find container \"aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03\": container with ID starting with aa7d7fe53c5af463b5198d4822ca0a9807f85fda1771735b7aa12f3625eb6b03 not found: ID does not exist" Jan 07 04:06:13 crc kubenswrapper[4980]: I0107 04:06:13.757699 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" path="/var/lib/kubelet/pods/7215d536-dbd7-4d24-a7d5-697123da3ca4/volumes" Jan 07 04:06:43 crc kubenswrapper[4980]: I0107 04:06:43.128030 4980 generic.go:334] "Generic (PLEG): container finished" podID="b6c5efe0-317c-4de6-9d52-c8790db72ae6" containerID="1e88a36ee601dcd50c1cdbb884f80c7a07b802d09f342437d9f7c88fcfa0dcf5" exitCode=0 Jan 07 04:06:43 crc kubenswrapper[4980]: I0107 04:06:43.128128 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" event={"ID":"b6c5efe0-317c-4de6-9d52-c8790db72ae6","Type":"ContainerDied","Data":"1e88a36ee601dcd50c1cdbb884f80c7a07b802d09f342437d9f7c88fcfa0dcf5"} Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.648064 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.649100 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovn-combined-ca-bundle\") pod \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.649155 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovncontroller-config-0\") pod \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.649188 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ssh-key-openstack-edpm-ipam\") pod \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.649255 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-inventory\") pod \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.649346 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ccw6\" (UniqueName: \"kubernetes.io/projected/b6c5efe0-317c-4de6-9d52-c8790db72ae6-kube-api-access-2ccw6\") pod \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\" (UID: \"b6c5efe0-317c-4de6-9d52-c8790db72ae6\") " Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.654304 4980 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c5efe0-317c-4de6-9d52-c8790db72ae6-kube-api-access-2ccw6" (OuterVolumeSpecName: "kube-api-access-2ccw6") pod "b6c5efe0-317c-4de6-9d52-c8790db72ae6" (UID: "b6c5efe0-317c-4de6-9d52-c8790db72ae6"). InnerVolumeSpecName "kube-api-access-2ccw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.661849 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b6c5efe0-317c-4de6-9d52-c8790db72ae6" (UID: "b6c5efe0-317c-4de6-9d52-c8790db72ae6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.677729 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-inventory" (OuterVolumeSpecName: "inventory") pod "b6c5efe0-317c-4de6-9d52-c8790db72ae6" (UID: "b6c5efe0-317c-4de6-9d52-c8790db72ae6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.694773 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6c5efe0-317c-4de6-9d52-c8790db72ae6" (UID: "b6c5efe0-317c-4de6-9d52-c8790db72ae6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.697911 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b6c5efe0-317c-4de6-9d52-c8790db72ae6" (UID: "b6c5efe0-317c-4de6-9d52-c8790db72ae6"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.752652 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.753031 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.753071 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ccw6\" (UniqueName: \"kubernetes.io/projected/b6c5efe0-317c-4de6-9d52-c8790db72ae6-kube-api-access-2ccw6\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.753088 4980 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:44 crc kubenswrapper[4980]: I0107 04:06:44.753107 4980 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c5efe0-317c-4de6-9d52-c8790db72ae6-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.151530 4980 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" event={"ID":"b6c5efe0-317c-4de6-9d52-c8790db72ae6","Type":"ContainerDied","Data":"619bf6a3e31336afebf0921e814bf3dde46b38f2ee3ad1ac0e2be3453b88e9ab"} Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.151591 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619bf6a3e31336afebf0921e814bf3dde46b38f2ee3ad1ac0e2be3453b88e9ab" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.151654 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w9lbc" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.257027 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl"] Jan 07 04:06:45 crc kubenswrapper[4980]: E0107 04:06:45.257873 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="extract-utilities" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.257895 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="extract-utilities" Jan 07 04:06:45 crc kubenswrapper[4980]: E0107 04:06:45.257906 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="registry-server" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.257913 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="registry-server" Jan 07 04:06:45 crc kubenswrapper[4980]: E0107 04:06:45.257934 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c5efe0-317c-4de6-9d52-c8790db72ae6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.257942 4980 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b6c5efe0-317c-4de6-9d52-c8790db72ae6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 07 04:06:45 crc kubenswrapper[4980]: E0107 04:06:45.257960 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="extract-content" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.257967 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="extract-content" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.258164 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7215d536-dbd7-4d24-a7d5-697123da3ca4" containerName="registry-server" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.258179 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c5efe0-317c-4de6-9d52-c8790db72ae6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.258850 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.261478 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.261876 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.262134 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.262362 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.262818 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.263062 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.264842 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.264982 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.265129 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.265307 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.265384 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.265530 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f8ns\" (UniqueName: \"kubernetes.io/projected/f588fdb7-1285-44cd-bf64-9b1681863e15-kube-api-access-2f8ns\") 
pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.269985 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl"] Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.366199 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f8ns\" (UniqueName: \"kubernetes.io/projected/f588fdb7-1285-44cd-bf64-9b1681863e15-kube-api-access-2f8ns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.366280 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.366314 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.366361 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.366427 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.366472 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.370903 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.371338 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.372410 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.375363 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.376681 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.384406 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f8ns\" (UniqueName: \"kubernetes.io/projected/f588fdb7-1285-44cd-bf64-9b1681863e15-kube-api-access-2f8ns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl\" (UID: 
\"f588fdb7-1285-44cd-bf64-9b1681863e15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:45 crc kubenswrapper[4980]: I0107 04:06:45.590811 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:06:46 crc kubenswrapper[4980]: I0107 04:06:46.185013 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl"] Jan 07 04:06:46 crc kubenswrapper[4980]: W0107 04:06:46.189157 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf588fdb7_1285_44cd_bf64_9b1681863e15.slice/crio-1bbfa0db535fc4d800d852d7a8809c9075fd9820763f0d466da983a7f8d07b3a WatchSource:0}: Error finding container 1bbfa0db535fc4d800d852d7a8809c9075fd9820763f0d466da983a7f8d07b3a: Status 404 returned error can't find the container with id 1bbfa0db535fc4d800d852d7a8809c9075fd9820763f0d466da983a7f8d07b3a Jan 07 04:06:47 crc kubenswrapper[4980]: I0107 04:06:47.174663 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" event={"ID":"f588fdb7-1285-44cd-bf64-9b1681863e15","Type":"ContainerStarted","Data":"2bc7d6048e35b38269eb0ebe86f26f539576b65a888dfea74b13a6cfb8b8da20"} Jan 07 04:06:47 crc kubenswrapper[4980]: I0107 04:06:47.175321 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" event={"ID":"f588fdb7-1285-44cd-bf64-9b1681863e15","Type":"ContainerStarted","Data":"1bbfa0db535fc4d800d852d7a8809c9075fd9820763f0d466da983a7f8d07b3a"} Jan 07 04:06:47 crc kubenswrapper[4980]: I0107 04:06:47.231288 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" 
podStartSLOduration=1.595232132 podStartE2EDuration="2.231266883s" podCreationTimestamp="2026-01-07 04:06:45 +0000 UTC" firstStartedPulling="2026-01-07 04:06:46.192226156 +0000 UTC m=+2052.757920891" lastFinishedPulling="2026-01-07 04:06:46.828260867 +0000 UTC m=+2053.393955642" observedRunningTime="2026-01-07 04:06:47.211149084 +0000 UTC m=+2053.776843879" watchObservedRunningTime="2026-01-07 04:06:47.231266883 +0000 UTC m=+2053.796961628" Jan 07 04:07:45 crc kubenswrapper[4980]: I0107 04:07:45.832715 4980 generic.go:334] "Generic (PLEG): container finished" podID="f588fdb7-1285-44cd-bf64-9b1681863e15" containerID="2bc7d6048e35b38269eb0ebe86f26f539576b65a888dfea74b13a6cfb8b8da20" exitCode=0 Jan 07 04:07:45 crc kubenswrapper[4980]: I0107 04:07:45.832809 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" event={"ID":"f588fdb7-1285-44cd-bf64-9b1681863e15","Type":"ContainerDied","Data":"2bc7d6048e35b38269eb0ebe86f26f539576b65a888dfea74b13a6cfb8b8da20"} Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.333903 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.456476 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-metadata-combined-ca-bundle\") pod \"f588fdb7-1285-44cd-bf64-9b1681863e15\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.456789 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-ssh-key-openstack-edpm-ipam\") pod \"f588fdb7-1285-44cd-bf64-9b1681863e15\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.456903 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-inventory\") pod \"f588fdb7-1285-44cd-bf64-9b1681863e15\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.456997 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f588fdb7-1285-44cd-bf64-9b1681863e15\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.457101 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f8ns\" (UniqueName: \"kubernetes.io/projected/f588fdb7-1285-44cd-bf64-9b1681863e15-kube-api-access-2f8ns\") pod \"f588fdb7-1285-44cd-bf64-9b1681863e15\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " Jan 07 
04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.457273 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-nova-metadata-neutron-config-0\") pod \"f588fdb7-1285-44cd-bf64-9b1681863e15\" (UID: \"f588fdb7-1285-44cd-bf64-9b1681863e15\") " Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.469014 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f588fdb7-1285-44cd-bf64-9b1681863e15-kube-api-access-2f8ns" (OuterVolumeSpecName: "kube-api-access-2f8ns") pod "f588fdb7-1285-44cd-bf64-9b1681863e15" (UID: "f588fdb7-1285-44cd-bf64-9b1681863e15"). InnerVolumeSpecName "kube-api-access-2f8ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.474185 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f588fdb7-1285-44cd-bf64-9b1681863e15" (UID: "f588fdb7-1285-44cd-bf64-9b1681863e15"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.482984 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-inventory" (OuterVolumeSpecName: "inventory") pod "f588fdb7-1285-44cd-bf64-9b1681863e15" (UID: "f588fdb7-1285-44cd-bf64-9b1681863e15"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.483925 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f588fdb7-1285-44cd-bf64-9b1681863e15" (UID: "f588fdb7-1285-44cd-bf64-9b1681863e15"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.497230 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f588fdb7-1285-44cd-bf64-9b1681863e15" (UID: "f588fdb7-1285-44cd-bf64-9b1681863e15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.505983 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f588fdb7-1285-44cd-bf64-9b1681863e15" (UID: "f588fdb7-1285-44cd-bf64-9b1681863e15"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.559709 4980 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.559748 4980 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.559764 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.559778 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.559791 4980 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f588fdb7-1285-44cd-bf64-9b1681863e15-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.559804 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f8ns\" (UniqueName: \"kubernetes.io/projected/f588fdb7-1285-44cd-bf64-9b1681863e15-kube-api-access-2f8ns\") on node \"crc\" DevicePath \"\"" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.857201 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" event={"ID":"f588fdb7-1285-44cd-bf64-9b1681863e15","Type":"ContainerDied","Data":"1bbfa0db535fc4d800d852d7a8809c9075fd9820763f0d466da983a7f8d07b3a"} Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.857678 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbfa0db535fc4d800d852d7a8809c9075fd9820763f0d466da983a7f8d07b3a" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.857261 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.970449 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg"] Jan 07 04:07:47 crc kubenswrapper[4980]: E0107 04:07:47.971132 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f588fdb7-1285-44cd-bf64-9b1681863e15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.971167 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f588fdb7-1285-44cd-bf64-9b1681863e15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.971529 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f588fdb7-1285-44cd-bf64-9b1681863e15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.972613 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.974526 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.975172 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.975491 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.975531 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.983163 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:07:47 crc kubenswrapper[4980]: I0107 04:07:47.983808 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg"] Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.074245 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrxj\" (UniqueName: \"kubernetes.io/projected/952aa7ac-68e0-4f49-bd80-407e2181fa05-kube-api-access-2lrxj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.074383 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: 
\"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.074438 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.074471 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.074610 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.176610 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.176779 4980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lrxj\" (UniqueName: \"kubernetes.io/projected/952aa7ac-68e0-4f49-bd80-407e2181fa05-kube-api-access-2lrxj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.176843 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.176874 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.176894 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.183469 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: 
\"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.187808 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.188633 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.190130 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.214417 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lrxj\" (UniqueName: \"kubernetes.io/projected/952aa7ac-68e0-4f49-bd80-407e2181fa05-kube-api-access-2lrxj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bngfg\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.297070 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.679348 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg"] Jan 07 04:07:48 crc kubenswrapper[4980]: I0107 04:07:48.868067 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" event={"ID":"952aa7ac-68e0-4f49-bd80-407e2181fa05","Type":"ContainerStarted","Data":"65d680fea4e3c721f67695e18b75e1f55aa996ae6d553f9460943515512cf407"} Jan 07 04:07:49 crc kubenswrapper[4980]: I0107 04:07:49.882447 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" event={"ID":"952aa7ac-68e0-4f49-bd80-407e2181fa05","Type":"ContainerStarted","Data":"cd1c7e9ac20ffeaa449ecbdd3094f0597708f8172e023322227a8d208b523f8a"} Jan 07 04:08:36 crc kubenswrapper[4980]: I0107 04:08:36.543826 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:08:36 crc kubenswrapper[4980]: I0107 04:08:36.544521 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.871122 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" podStartSLOduration=50.385255727 podStartE2EDuration="50.871094488s" podCreationTimestamp="2026-01-07 
04:07:47 +0000 UTC" firstStartedPulling="2026-01-07 04:07:48.689209116 +0000 UTC m=+2115.254903891" lastFinishedPulling="2026-01-07 04:07:49.175047877 +0000 UTC m=+2115.740742652" observedRunningTime="2026-01-07 04:07:49.912456465 +0000 UTC m=+2116.478151210" watchObservedRunningTime="2026-01-07 04:08:37.871094488 +0000 UTC m=+2164.436789233" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.873987 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6zt9n"] Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.878282 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.885426 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zt9n"] Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.888600 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psch9\" (UniqueName: \"kubernetes.io/projected/decf7277-5037-4a9b-a846-9180ef132d40-kube-api-access-psch9\") pod \"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.888704 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-catalog-content\") pod \"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.888869 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-utilities\") pod 
\"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.990877 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psch9\" (UniqueName: \"kubernetes.io/projected/decf7277-5037-4a9b-a846-9180ef132d40-kube-api-access-psch9\") pod \"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.990996 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-catalog-content\") pod \"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.991125 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-utilities\") pod \"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.991864 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-utilities\") pod \"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:37 crc kubenswrapper[4980]: I0107 04:08:37.991866 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-catalog-content\") pod \"certified-operators-6zt9n\" (UID: 
\"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:38 crc kubenswrapper[4980]: I0107 04:08:38.020613 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psch9\" (UniqueName: \"kubernetes.io/projected/decf7277-5037-4a9b-a846-9180ef132d40-kube-api-access-psch9\") pod \"certified-operators-6zt9n\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:38 crc kubenswrapper[4980]: I0107 04:08:38.204376 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:38 crc kubenswrapper[4980]: I0107 04:08:38.672923 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zt9n"] Jan 07 04:08:38 crc kubenswrapper[4980]: W0107 04:08:38.674107 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddecf7277_5037_4a9b_a846_9180ef132d40.slice/crio-74157624485feefeed8b92d510ca499133371e207238cbeec7667c3d739174be WatchSource:0}: Error finding container 74157624485feefeed8b92d510ca499133371e207238cbeec7667c3d739174be: Status 404 returned error can't find the container with id 74157624485feefeed8b92d510ca499133371e207238cbeec7667c3d739174be Jan 07 04:08:39 crc kubenswrapper[4980]: I0107 04:08:39.439991 4980 generic.go:334] "Generic (PLEG): container finished" podID="decf7277-5037-4a9b-a846-9180ef132d40" containerID="5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce" exitCode=0 Jan 07 04:08:39 crc kubenswrapper[4980]: I0107 04:08:39.440076 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zt9n" event={"ID":"decf7277-5037-4a9b-a846-9180ef132d40","Type":"ContainerDied","Data":"5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce"} Jan 07 04:08:39 
crc kubenswrapper[4980]: I0107 04:08:39.440442 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zt9n" event={"ID":"decf7277-5037-4a9b-a846-9180ef132d40","Type":"ContainerStarted","Data":"74157624485feefeed8b92d510ca499133371e207238cbeec7667c3d739174be"} Jan 07 04:08:41 crc kubenswrapper[4980]: I0107 04:08:41.459390 4980 generic.go:334] "Generic (PLEG): container finished" podID="decf7277-5037-4a9b-a846-9180ef132d40" containerID="3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a" exitCode=0 Jan 07 04:08:41 crc kubenswrapper[4980]: I0107 04:08:41.459456 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zt9n" event={"ID":"decf7277-5037-4a9b-a846-9180ef132d40","Type":"ContainerDied","Data":"3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a"} Jan 07 04:08:42 crc kubenswrapper[4980]: I0107 04:08:42.471298 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zt9n" event={"ID":"decf7277-5037-4a9b-a846-9180ef132d40","Type":"ContainerStarted","Data":"4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff"} Jan 07 04:08:42 crc kubenswrapper[4980]: I0107 04:08:42.496362 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6zt9n" podStartSLOduration=3.043672919 podStartE2EDuration="5.496343555s" podCreationTimestamp="2026-01-07 04:08:37 +0000 UTC" firstStartedPulling="2026-01-07 04:08:39.442199651 +0000 UTC m=+2166.007894416" lastFinishedPulling="2026-01-07 04:08:41.894870277 +0000 UTC m=+2168.460565052" observedRunningTime="2026-01-07 04:08:42.490336099 +0000 UTC m=+2169.056030844" watchObservedRunningTime="2026-01-07 04:08:42.496343555 +0000 UTC m=+2169.062038300" Jan 07 04:08:48 crc kubenswrapper[4980]: I0107 04:08:48.204958 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:48 crc kubenswrapper[4980]: I0107 04:08:48.205677 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:48 crc kubenswrapper[4980]: I0107 04:08:48.260915 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:48 crc kubenswrapper[4980]: I0107 04:08:48.608894 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:48 crc kubenswrapper[4980]: I0107 04:08:48.675897 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6zt9n"] Jan 07 04:08:50 crc kubenswrapper[4980]: I0107 04:08:50.561089 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6zt9n" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="registry-server" containerID="cri-o://4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff" gracePeriod=2 Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.057712 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.171371 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psch9\" (UniqueName: \"kubernetes.io/projected/decf7277-5037-4a9b-a846-9180ef132d40-kube-api-access-psch9\") pod \"decf7277-5037-4a9b-a846-9180ef132d40\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.171725 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-catalog-content\") pod \"decf7277-5037-4a9b-a846-9180ef132d40\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.171808 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-utilities\") pod \"decf7277-5037-4a9b-a846-9180ef132d40\" (UID: \"decf7277-5037-4a9b-a846-9180ef132d40\") " Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.173686 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-utilities" (OuterVolumeSpecName: "utilities") pod "decf7277-5037-4a9b-a846-9180ef132d40" (UID: "decf7277-5037-4a9b-a846-9180ef132d40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.177744 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/decf7277-5037-4a9b-a846-9180ef132d40-kube-api-access-psch9" (OuterVolumeSpecName: "kube-api-access-psch9") pod "decf7277-5037-4a9b-a846-9180ef132d40" (UID: "decf7277-5037-4a9b-a846-9180ef132d40"). InnerVolumeSpecName "kube-api-access-psch9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.274527 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.274760 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psch9\" (UniqueName: \"kubernetes.io/projected/decf7277-5037-4a9b-a846-9180ef132d40-kube-api-access-psch9\") on node \"crc\" DevicePath \"\"" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.415848 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "decf7277-5037-4a9b-a846-9180ef132d40" (UID: "decf7277-5037-4a9b-a846-9180ef132d40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.479132 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decf7277-5037-4a9b-a846-9180ef132d40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.575954 4980 generic.go:334] "Generic (PLEG): container finished" podID="decf7277-5037-4a9b-a846-9180ef132d40" containerID="4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff" exitCode=0 Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.576026 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zt9n" event={"ID":"decf7277-5037-4a9b-a846-9180ef132d40","Type":"ContainerDied","Data":"4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff"} Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.576052 4980 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zt9n" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.576088 4980 scope.go:117] "RemoveContainer" containerID="4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.576070 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zt9n" event={"ID":"decf7277-5037-4a9b-a846-9180ef132d40","Type":"ContainerDied","Data":"74157624485feefeed8b92d510ca499133371e207238cbeec7667c3d739174be"} Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.609692 4980 scope.go:117] "RemoveContainer" containerID="3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.637678 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6zt9n"] Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.649673 4980 scope.go:117] "RemoveContainer" containerID="5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.650490 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6zt9n"] Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.722367 4980 scope.go:117] "RemoveContainer" containerID="4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff" Jan 07 04:08:51 crc kubenswrapper[4980]: E0107 04:08:51.723025 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff\": container with ID starting with 4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff not found: ID does not exist" containerID="4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.723121 
4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff"} err="failed to get container status \"4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff\": rpc error: code = NotFound desc = could not find container \"4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff\": container with ID starting with 4306901bc0dab8e3cfde20a19fb4b07c1eda94026248b9fd660c86d5a39c81ff not found: ID does not exist" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.723238 4980 scope.go:117] "RemoveContainer" containerID="3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a" Jan 07 04:08:51 crc kubenswrapper[4980]: E0107 04:08:51.723950 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a\": container with ID starting with 3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a not found: ID does not exist" containerID="3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.724030 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a"} err="failed to get container status \"3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a\": rpc error: code = NotFound desc = could not find container \"3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a\": container with ID starting with 3d656567d35de5fc7b5e5822b4a0f6326f543f1df57a32bc9bcf665af07f892a not found: ID does not exist" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.724085 4980 scope.go:117] "RemoveContainer" containerID="5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce" Jan 07 04:08:51 crc kubenswrapper[4980]: E0107 
04:08:51.724893 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce\": container with ID starting with 5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce not found: ID does not exist" containerID="5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.724938 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce"} err="failed to get container status \"5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce\": rpc error: code = NotFound desc = could not find container \"5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce\": container with ID starting with 5add1da35c10ca05bdab201574adf0f1f7a428e35ec0242d6667eeaa3b3063ce not found: ID does not exist" Jan 07 04:08:51 crc kubenswrapper[4980]: I0107 04:08:51.757031 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="decf7277-5037-4a9b-a846-9180ef132d40" path="/var/lib/kubelet/pods/decf7277-5037-4a9b-a846-9180ef132d40/volumes" Jan 07 04:09:06 crc kubenswrapper[4980]: I0107 04:09:06.543011 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:09:06 crc kubenswrapper[4980]: I0107 04:09:06.545121 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 07 04:09:36 crc kubenswrapper[4980]: I0107 04:09:36.543671 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:09:36 crc kubenswrapper[4980]: I0107 04:09:36.544328 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:09:36 crc kubenswrapper[4980]: I0107 04:09:36.544389 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 04:09:36 crc kubenswrapper[4980]: I0107 04:09:36.545221 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 04:09:36 crc kubenswrapper[4980]: I0107 04:09:36.545342 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" gracePeriod=600 Jan 07 04:09:36 crc kubenswrapper[4980]: E0107 04:09:36.781223 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:09:37 crc kubenswrapper[4980]: I0107 04:09:37.058060 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" exitCode=0 Jan 07 04:09:37 crc kubenswrapper[4980]: I0107 04:09:37.058110 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"} Jan 07 04:09:37 crc kubenswrapper[4980]: I0107 04:09:37.058143 4980 scope.go:117] "RemoveContainer" containerID="d1ed879a83db37a20bc52acd9903f3aabf2339db3829b25c95e9a6e9ed19722c" Jan 07 04:09:37 crc kubenswrapper[4980]: I0107 04:09:37.058688 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:09:37 crc kubenswrapper[4980]: E0107 04:09:37.058909 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:09:49 crc kubenswrapper[4980]: I0107 04:09:49.735755 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:09:49 crc kubenswrapper[4980]: E0107 04:09:49.737045 4980 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:10:04 crc kubenswrapper[4980]: I0107 04:10:04.736022 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:10:04 crc kubenswrapper[4980]: E0107 04:10:04.736820 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:10:19 crc kubenswrapper[4980]: I0107 04:10:19.736763 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:10:19 crc kubenswrapper[4980]: E0107 04:10:19.738146 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.624529 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qbmdx"] Jan 07 04:10:23 crc kubenswrapper[4980]: E0107 04:10:23.625491 4980 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="extract-utilities" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.625507 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="extract-utilities" Jan 07 04:10:23 crc kubenswrapper[4980]: E0107 04:10:23.625538 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="registry-server" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.625546 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="registry-server" Jan 07 04:10:23 crc kubenswrapper[4980]: E0107 04:10:23.625583 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="extract-content" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.625592 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="extract-content" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.625830 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="decf7277-5037-4a9b-a846-9180ef132d40" containerName="registry-server" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.627543 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.638007 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbmdx"] Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.731427 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-utilities\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.732022 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-catalog-content\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.732212 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjnh\" (UniqueName: \"kubernetes.io/projected/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-kube-api-access-ljjnh\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.833466 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-utilities\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.833548 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-catalog-content\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.834024 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjnh\" (UniqueName: \"kubernetes.io/projected/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-kube-api-access-ljjnh\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.834298 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-catalog-content\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.835056 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-utilities\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.858802 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjnh\" (UniqueName: \"kubernetes.io/projected/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-kube-api-access-ljjnh\") pod \"community-operators-qbmdx\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") " pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:23 crc kubenswrapper[4980]: I0107 04:10:23.957740 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:24 crc kubenswrapper[4980]: I0107 04:10:24.536153 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbmdx"] Jan 07 04:10:24 crc kubenswrapper[4980]: I0107 04:10:24.677898 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbmdx" event={"ID":"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce","Type":"ContainerStarted","Data":"23e5e4859f98c303b18a6e6aed9b69c4c3863126be5eb66a6a8e4f0966f3b42b"} Jan 07 04:10:25 crc kubenswrapper[4980]: I0107 04:10:25.692273 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbmdx" event={"ID":"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce","Type":"ContainerDied","Data":"ee9d7fec32aa1b7c9921c7989be2be4d1e25698cf0e4f2389bad409428a03fc6"} Jan 07 04:10:25 crc kubenswrapper[4980]: I0107 04:10:25.692305 4980 generic.go:334] "Generic (PLEG): container finished" podID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerID="ee9d7fec32aa1b7c9921c7989be2be4d1e25698cf0e4f2389bad409428a03fc6" exitCode=0 Jan 07 04:10:25 crc kubenswrapper[4980]: I0107 04:10:25.694953 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 04:10:27 crc kubenswrapper[4980]: I0107 04:10:27.720669 4980 generic.go:334] "Generic (PLEG): container finished" podID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerID="d630f9aa4e75a6543802e1b1030846357eeae35708f045b0c0280df646e6c9e4" exitCode=0 Jan 07 04:10:27 crc kubenswrapper[4980]: I0107 04:10:27.720741 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbmdx" event={"ID":"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce","Type":"ContainerDied","Data":"d630f9aa4e75a6543802e1b1030846357eeae35708f045b0c0280df646e6c9e4"} Jan 07 04:10:28 crc kubenswrapper[4980]: I0107 04:10:28.737626 4980 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-qbmdx" event={"ID":"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce","Type":"ContainerStarted","Data":"34c504237345e10c774fc55f95cb3c5e85200bc6ccbac5efe6c4941fd59f07fc"} Jan 07 04:10:33 crc kubenswrapper[4980]: I0107 04:10:33.958703 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:33 crc kubenswrapper[4980]: I0107 04:10:33.959385 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:34 crc kubenswrapper[4980]: I0107 04:10:34.044780 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:34 crc kubenswrapper[4980]: I0107 04:10:34.083442 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qbmdx" podStartSLOduration=8.623640209 podStartE2EDuration="11.083417166s" podCreationTimestamp="2026-01-07 04:10:23 +0000 UTC" firstStartedPulling="2026-01-07 04:10:25.694500515 +0000 UTC m=+2272.260195280" lastFinishedPulling="2026-01-07 04:10:28.154277472 +0000 UTC m=+2274.719972237" observedRunningTime="2026-01-07 04:10:28.771830839 +0000 UTC m=+2275.337525614" watchObservedRunningTime="2026-01-07 04:10:34.083417166 +0000 UTC m=+2280.649111931" Jan 07 04:10:34 crc kubenswrapper[4980]: I0107 04:10:34.735853 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:10:34 crc kubenswrapper[4980]: E0107 04:10:34.736319 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:10:34 crc kubenswrapper[4980]: I0107 04:10:34.911500 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qbmdx" Jan 07 04:10:34 crc kubenswrapper[4980]: I0107 04:10:34.995974 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbmdx"] Jan 07 04:10:36 crc kubenswrapper[4980]: I0107 04:10:36.857380 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qbmdx" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="registry-server" containerID="cri-o://34c504237345e10c774fc55f95cb3c5e85200bc6ccbac5efe6c4941fd59f07fc" gracePeriod=2 Jan 07 04:10:37 crc kubenswrapper[4980]: I0107 04:10:37.871359 4980 generic.go:334] "Generic (PLEG): container finished" podID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerID="34c504237345e10c774fc55f95cb3c5e85200bc6ccbac5efe6c4941fd59f07fc" exitCode=0 Jan 07 04:10:37 crc kubenswrapper[4980]: I0107 04:10:37.871432 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbmdx" event={"ID":"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce","Type":"ContainerDied","Data":"34c504237345e10c774fc55f95cb3c5e85200bc6ccbac5efe6c4941fd59f07fc"} Jan 07 04:10:37 crc kubenswrapper[4980]: I0107 04:10:37.871791 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbmdx" event={"ID":"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce","Type":"ContainerDied","Data":"23e5e4859f98c303b18a6e6aed9b69c4c3863126be5eb66a6a8e4f0966f3b42b"} Jan 07 04:10:37 crc kubenswrapper[4980]: I0107 04:10:37.871818 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23e5e4859f98c303b18a6e6aed9b69c4c3863126be5eb66a6a8e4f0966f3b42b" Jan 07 04:10:37 crc 
kubenswrapper[4980]: I0107 04:10:37.939636 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbmdx"
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.069493 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljjnh\" (UniqueName: \"kubernetes.io/projected/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-kube-api-access-ljjnh\") pod \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") "
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.070303 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-utilities\") pod \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") "
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.070506 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-catalog-content\") pod \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\" (UID: \"8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce\") "
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.071932 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-utilities" (OuterVolumeSpecName: "utilities") pod "8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" (UID: "8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.079212 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-kube-api-access-ljjnh" (OuterVolumeSpecName: "kube-api-access-ljjnh") pod "8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" (UID: "8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce"). InnerVolumeSpecName "kube-api-access-ljjnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.081231 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-utilities\") on node \"crc\" DevicePath \"\""
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.081287 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljjnh\" (UniqueName: \"kubernetes.io/projected/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-kube-api-access-ljjnh\") on node \"crc\" DevicePath \"\""
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.148939 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" (UID: "8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.183355 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.883457 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbmdx"
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.936995 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbmdx"]
Jan 07 04:10:38 crc kubenswrapper[4980]: I0107 04:10:38.949639 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qbmdx"]
Jan 07 04:10:39 crc kubenswrapper[4980]: I0107 04:10:39.756647 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" path="/var/lib/kubelet/pods/8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce/volumes"
Jan 07 04:10:46 crc kubenswrapper[4980]: I0107 04:10:46.739644 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:10:46 crc kubenswrapper[4980]: E0107 04:10:46.741332 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:11:01 crc kubenswrapper[4980]: I0107 04:11:01.737575 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:11:01 crc kubenswrapper[4980]: E0107 04:11:01.738710 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:11:13 crc kubenswrapper[4980]: I0107 04:11:13.755331 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:11:13 crc kubenswrapper[4980]: E0107 04:11:13.756496 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:11:27 crc kubenswrapper[4980]: I0107 04:11:27.736245 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:11:27 crc kubenswrapper[4980]: E0107 04:11:27.737338 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:11:38 crc kubenswrapper[4980]: I0107 04:11:38.736096 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:11:38 crc kubenswrapper[4980]: E0107 04:11:38.737221 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:11:51 crc kubenswrapper[4980]: I0107 04:11:51.737087 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:11:51 crc kubenswrapper[4980]: E0107 04:11:51.738254 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:12:03 crc kubenswrapper[4980]: I0107 04:12:03.747078 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:12:03 crc kubenswrapper[4980]: E0107 04:12:03.748379 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:12:14 crc kubenswrapper[4980]: I0107 04:12:14.736451 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:12:14 crc kubenswrapper[4980]: E0107 04:12:14.737658 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:12:25 crc kubenswrapper[4980]: I0107 04:12:25.158998 4980 generic.go:334] "Generic (PLEG): container finished" podID="952aa7ac-68e0-4f49-bd80-407e2181fa05" containerID="cd1c7e9ac20ffeaa449ecbdd3094f0597708f8172e023322227a8d208b523f8a" exitCode=0
Jan 07 04:12:25 crc kubenswrapper[4980]: I0107 04:12:25.159702 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" event={"ID":"952aa7ac-68e0-4f49-bd80-407e2181fa05","Type":"ContainerDied","Data":"cd1c7e9ac20ffeaa449ecbdd3094f0597708f8172e023322227a8d208b523f8a"}
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.736427 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:12:26 crc kubenswrapper[4980]: E0107 04:12:26.737105 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.763280 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg"
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.887613 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-ssh-key-openstack-edpm-ipam\") pod \"952aa7ac-68e0-4f49-bd80-407e2181fa05\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") "
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.887654 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lrxj\" (UniqueName: \"kubernetes.io/projected/952aa7ac-68e0-4f49-bd80-407e2181fa05-kube-api-access-2lrxj\") pod \"952aa7ac-68e0-4f49-bd80-407e2181fa05\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") "
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.887710 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-inventory\") pod \"952aa7ac-68e0-4f49-bd80-407e2181fa05\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") "
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.887751 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-combined-ca-bundle\") pod \"952aa7ac-68e0-4f49-bd80-407e2181fa05\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") "
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.887816 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-secret-0\") pod \"952aa7ac-68e0-4f49-bd80-407e2181fa05\" (UID: \"952aa7ac-68e0-4f49-bd80-407e2181fa05\") "
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.897779 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952aa7ac-68e0-4f49-bd80-407e2181fa05-kube-api-access-2lrxj" (OuterVolumeSpecName: "kube-api-access-2lrxj") pod "952aa7ac-68e0-4f49-bd80-407e2181fa05" (UID: "952aa7ac-68e0-4f49-bd80-407e2181fa05"). InnerVolumeSpecName "kube-api-access-2lrxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.902989 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "952aa7ac-68e0-4f49-bd80-407e2181fa05" (UID: "952aa7ac-68e0-4f49-bd80-407e2181fa05"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.925703 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-inventory" (OuterVolumeSpecName: "inventory") pod "952aa7ac-68e0-4f49-bd80-407e2181fa05" (UID: "952aa7ac-68e0-4f49-bd80-407e2181fa05"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.926658 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "952aa7ac-68e0-4f49-bd80-407e2181fa05" (UID: "952aa7ac-68e0-4f49-bd80-407e2181fa05"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.928304 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "952aa7ac-68e0-4f49-bd80-407e2181fa05" (UID: "952aa7ac-68e0-4f49-bd80-407e2181fa05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.990324 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.990358 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lrxj\" (UniqueName: \"kubernetes.io/projected/952aa7ac-68e0-4f49-bd80-407e2181fa05-kube-api-access-2lrxj\") on node \"crc\" DevicePath \"\""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.990367 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-inventory\") on node \"crc\" DevicePath \"\""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.990377 4980 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 04:12:26 crc kubenswrapper[4980]: I0107 04:12:26.990386 4980 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/952aa7ac-68e0-4f49-bd80-407e2181fa05-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.179889 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg" event={"ID":"952aa7ac-68e0-4f49-bd80-407e2181fa05","Type":"ContainerDied","Data":"65d680fea4e3c721f67695e18b75e1f55aa996ae6d553f9460943515512cf407"}
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.179928 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bngfg"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.179945 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d680fea4e3c721f67695e18b75e1f55aa996ae6d553f9460943515512cf407"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.284299 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"]
Jan 07 04:12:27 crc kubenswrapper[4980]: E0107 04:12:27.284654 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="extract-utilities"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.284669 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="extract-utilities"
Jan 07 04:12:27 crc kubenswrapper[4980]: E0107 04:12:27.284707 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952aa7ac-68e0-4f49-bd80-407e2181fa05" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.284714 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="952aa7ac-68e0-4f49-bd80-407e2181fa05" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:12:27 crc kubenswrapper[4980]: E0107 04:12:27.284730 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="registry-server"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.284736 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="registry-server"
Jan 07 04:12:27 crc kubenswrapper[4980]: E0107 04:12:27.284749 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="extract-content"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.284755 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="extract-content"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.284914 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="952aa7ac-68e0-4f49-bd80-407e2181fa05" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.284937 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f00d0c0-0cf7-49c8-84ef-e2d52fd013ce" containerName="registry-server"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.285477 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.289304 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.289323 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.289429 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.289747 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.289945 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.290035 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.290105 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.301935 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"]
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.396989 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397049 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397219 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397287 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397505 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397626 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67ct\" (UniqueName: \"kubernetes.io/projected/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-kube-api-access-w67ct\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397703 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397791 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.397893 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500042 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500398 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500453 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500482 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500536 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500582 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500635 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500675 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w67ct\" (UniqueName: \"kubernetes.io/projected/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-kube-api-access-w67ct\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.500730 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.503242 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.510040 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.510274 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.510721 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.513112 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.513287 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.520217 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.524203 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.530299 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w67ct\" (UniqueName: \"kubernetes.io/projected/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-kube-api-access-w67ct\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w2gvj\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:27 crc kubenswrapper[4980]: I0107 04:12:27.600379 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"
Jan 07 04:12:28 crc kubenswrapper[4980]: I0107 04:12:28.216915 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj"]
Jan 07 04:12:29 crc kubenswrapper[4980]: I0107 04:12:29.205186 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj" event={"ID":"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e","Type":"ContainerStarted","Data":"c2a20977c8accf39c125640a4979c2021f39209d9af7bacd7268018c7895aca0"}
Jan 07 04:12:29 crc kubenswrapper[4980]: I0107 04:12:29.239002 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj" podStartSLOduration=1.604308318 podStartE2EDuration="2.238970339s" podCreationTimestamp="2026-01-07 04:12:27 +0000 UTC" firstStartedPulling="2026-01-07 04:12:28.244727173 +0000 UTC m=+2394.810421918" lastFinishedPulling="2026-01-07 04:12:28.879389204 +0000 UTC m=+2395.445083939" observedRunningTime="2026-01-07 04:12:29.230051072 +0000 UTC m=+2395.795745807" watchObservedRunningTime="2026-01-07 04:12:29.238970339 +0000 UTC m=+2395.804665104"
Jan 07 04:12:30 crc kubenswrapper[4980]: I0107 04:12:30.218841 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj" event={"ID":"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e","Type":"ContainerStarted","Data":"5255e3da5e3131619ba7dcb42e5975f32af4135e021db344bb8f64764f0f67fb"}
Jan 07 04:12:38 crc kubenswrapper[4980]: I0107 04:12:38.736433 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:12:38 crc kubenswrapper[4980]: E0107 04:12:38.737211 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:12:52 crc kubenswrapper[4980]: I0107 04:12:52.735486 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:12:52 crc kubenswrapper[4980]: E0107 04:12:52.736491 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:13:07 crc kubenswrapper[4980]: I0107 04:13:07.736225 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:13:07 crc kubenswrapper[4980]: E0107 04:13:07.736918 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:13:21 crc kubenswrapper[4980]: I0107 04:13:21.736302 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:13:21 crc kubenswrapper[4980]: E0107 04:13:21.737367 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:13:33 crc kubenswrapper[4980]: I0107 04:13:33.749463 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:13:33 crc kubenswrapper[4980]: E0107 04:13:33.750513 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:13:47 crc kubenswrapper[4980]: I0107 04:13:47.736378 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:13:47 crc kubenswrapper[4980]: E0107 04:13:47.737639 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:13:59 crc kubenswrapper[4980]: I0107 04:13:59.735274 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e"
Jan 07 04:13:59 crc kubenswrapper[4980]: E0107 04:13:59.736084 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:14:12 crc kubenswrapper[4980]: I0107 04:14:12.736958 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:14:12 crc kubenswrapper[4980]: E0107 04:14:12.737623 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:14:24 crc kubenswrapper[4980]: I0107 04:14:24.735858 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:14:24 crc kubenswrapper[4980]: E0107 04:14:24.737081 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:14:35 crc kubenswrapper[4980]: I0107 04:14:35.737008 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:14:35 crc kubenswrapper[4980]: E0107 04:14:35.738303 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:14:46 crc kubenswrapper[4980]: I0107 04:14:46.736525 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:14:47 crc kubenswrapper[4980]: I0107 04:14:47.836925 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"09c860ca894a226a129a0988a78a99e28eb4b51d74ecc7cd9b27a6746346462b"} Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.179499 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9"] Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.183519 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.186942 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.187351 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.195897 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9"] Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.380290 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-secret-volume\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.380927 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-config-volume\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.381879 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhcsz\" (UniqueName: \"kubernetes.io/projected/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-kube-api-access-hhcsz\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.483233 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhcsz\" (UniqueName: \"kubernetes.io/projected/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-kube-api-access-hhcsz\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.483320 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-secret-volume\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.483426 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-config-volume\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.484282 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-config-volume\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.490646 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-secret-volume\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.510086 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhcsz\" (UniqueName: \"kubernetes.io/projected/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-kube-api-access-hhcsz\") pod \"collect-profiles-29462655-jpcs9\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.514340 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.837017 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9"] Jan 07 04:15:00 crc kubenswrapper[4980]: I0107 04:15:00.998046 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" event={"ID":"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f","Type":"ContainerStarted","Data":"588f4502421c787c7b90165a7254d66a03d999886cc38d89d3ccf354c9af1582"} Jan 07 04:15:02 crc kubenswrapper[4980]: I0107 04:15:02.010917 4980 generic.go:334] "Generic (PLEG): container finished" podID="c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f" containerID="923c82a1c2466f6d83957ef6eeff026a21aaffdade3da56b1af99bea8a115932" exitCode=0 Jan 07 04:15:02 crc kubenswrapper[4980]: I0107 04:15:02.011047 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" 
event={"ID":"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f","Type":"ContainerDied","Data":"923c82a1c2466f6d83957ef6eeff026a21aaffdade3da56b1af99bea8a115932"} Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.396679 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.549102 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhcsz\" (UniqueName: \"kubernetes.io/projected/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-kube-api-access-hhcsz\") pod \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.549175 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-config-volume\") pod \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.549294 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-secret-volume\") pod \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\" (UID: \"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f\") " Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.549909 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f" (UID: "c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.550404 4980 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.555919 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-kube-api-access-hhcsz" (OuterVolumeSpecName: "kube-api-access-hhcsz") pod "c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f" (UID: "c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f"). InnerVolumeSpecName "kube-api-access-hhcsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.556414 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f" (UID: "c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.651973 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhcsz\" (UniqueName: \"kubernetes.io/projected/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-kube-api-access-hhcsz\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:03 crc kubenswrapper[4980]: I0107 04:15:03.652024 4980 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:04 crc kubenswrapper[4980]: I0107 04:15:04.033632 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" event={"ID":"c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f","Type":"ContainerDied","Data":"588f4502421c787c7b90165a7254d66a03d999886cc38d89d3ccf354c9af1582"} Jan 07 04:15:04 crc kubenswrapper[4980]: I0107 04:15:04.034150 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588f4502421c787c7b90165a7254d66a03d999886cc38d89d3ccf354c9af1582" Jan 07 04:15:04 crc kubenswrapper[4980]: I0107 04:15:04.033693 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462655-jpcs9" Jan 07 04:15:04 crc kubenswrapper[4980]: I0107 04:15:04.503116 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr"] Jan 07 04:15:04 crc kubenswrapper[4980]: I0107 04:15:04.513148 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462610-54bwr"] Jan 07 04:15:05 crc kubenswrapper[4980]: I0107 04:15:05.755750 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5331ee-46c7-4826-85cf-3c57f25f1d6c" path="/var/lib/kubelet/pods/7a5331ee-46c7-4826-85cf-3c57f25f1d6c/volumes" Jan 07 04:15:16 crc kubenswrapper[4980]: I0107 04:15:16.202339 4980 generic.go:334] "Generic (PLEG): container finished" podID="b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" containerID="5255e3da5e3131619ba7dcb42e5975f32af4135e021db344bb8f64764f0f67fb" exitCode=0 Jan 07 04:15:16 crc kubenswrapper[4980]: I0107 04:15:16.202464 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj" event={"ID":"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e","Type":"ContainerDied","Data":"5255e3da5e3131619ba7dcb42e5975f32af4135e021db344bb8f64764f0f67fb"} Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.757283 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876050 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-0\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876138 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w67ct\" (UniqueName: \"kubernetes.io/projected/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-kube-api-access-w67ct\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876181 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-0\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876270 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-ssh-key-openstack-edpm-ipam\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876385 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-combined-ca-bundle\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 
04:15:17.876493 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-extra-config-0\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876519 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-1\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876550 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-inventory\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.876594 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-1\") pod \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\" (UID: \"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e\") " Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.881453 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.903729 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.905627 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-kube-api-access-w67ct" (OuterVolumeSpecName: "kube-api-access-w67ct") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "kube-api-access-w67ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.905720 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.908299 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.910596 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.930448 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.930514 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.932822 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-inventory" (OuterVolumeSpecName: "inventory") pod "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" (UID: "b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979029 4980 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979073 4980 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979089 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979103 4980 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979116 4980 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979127 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w67ct\" (UniqueName: \"kubernetes.io/projected/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-kube-api-access-w67ct\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979140 4980 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-cell1-compute-config-0\") on node 
\"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979151 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:17 crc kubenswrapper[4980]: I0107 04:15:17.979164 4980 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.223935 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj" event={"ID":"b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e","Type":"ContainerDied","Data":"c2a20977c8accf39c125640a4979c2021f39209d9af7bacd7268018c7895aca0"} Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.223975 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a20977c8accf39c125640a4979c2021f39209d9af7bacd7268018c7895aca0" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.224037 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w2gvj" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.338262 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb"] Jan 07 04:15:18 crc kubenswrapper[4980]: E0107 04:15:18.338707 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f" containerName="collect-profiles" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.338728 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f" containerName="collect-profiles" Jan 07 04:15:18 crc kubenswrapper[4980]: E0107 04:15:18.338759 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.338766 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.338980 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5dd2dc2-4ba7-4eae-8d2b-1111b9e7ea1f" containerName="collect-profiles" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.339013 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.339735 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.346489 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.347280 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.348285 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.348863 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7ttf" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.351604 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.354196 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb"] Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.486583 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.486679 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.486733 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286ld\" (UniqueName: \"kubernetes.io/projected/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-kube-api-access-286ld\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.486976 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.487043 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.487154 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.487213 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.589332 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.589414 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.590348 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.590399 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.590628 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.590678 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.590721 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286ld\" (UniqueName: \"kubernetes.io/projected/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-kube-api-access-286ld\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.598591 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.599702 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.599857 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.600452 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.601007 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.603511 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.620306 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286ld\" (UniqueName: \"kubernetes.io/projected/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-kube-api-access-286ld\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:18 crc kubenswrapper[4980]: I0107 04:15:18.656747 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:15:19 crc kubenswrapper[4980]: I0107 04:15:19.263466 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb"] Jan 07 04:15:20 crc kubenswrapper[4980]: I0107 04:15:20.246004 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" event={"ID":"7cd7afa6-8208-47e3-b598-0f2e8578dc3f","Type":"ContainerStarted","Data":"ecfa64c3ca56d47680795e32ed8b2b47ecb261712c98f9a5cb6e393160abbe6d"} Jan 07 04:15:20 crc kubenswrapper[4980]: I0107 04:15:20.246462 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" event={"ID":"7cd7afa6-8208-47e3-b598-0f2e8578dc3f","Type":"ContainerStarted","Data":"57c8a68e0acf8840081f3f549bfbef5faf629992e0c37e1e97942acff4c284a7"} Jan 07 04:15:20 crc kubenswrapper[4980]: I0107 04:15:20.271599 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" podStartSLOduration=1.650289611 podStartE2EDuration="2.271576178s" podCreationTimestamp="2026-01-07 04:15:18 +0000 UTC" firstStartedPulling="2026-01-07 04:15:19.275810628 +0000 UTC m=+2565.841505363" lastFinishedPulling="2026-01-07 04:15:19.897097195 +0000 UTC m=+2566.462791930" observedRunningTime="2026-01-07 04:15:20.270274448 +0000 UTC m=+2566.835969223" watchObservedRunningTime="2026-01-07 04:15:20.271576178 +0000 UTC m=+2566.837270953" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.677601 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fwgng"] Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.681797 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.692319 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwgng"] Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.816014 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-utilities\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.816063 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkmt6\" (UniqueName: \"kubernetes.io/projected/4164bd91-b646-4ebe-9d8d-31de204e19f6-kube-api-access-wkmt6\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.816093 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-catalog-content\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.919199 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-utilities\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.919300 4980 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wkmt6\" (UniqueName: \"kubernetes.io/projected/4164bd91-b646-4ebe-9d8d-31de204e19f6-kube-api-access-wkmt6\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.919471 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-catalog-content\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.920297 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-utilities\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.920316 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-catalog-content\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:31 crc kubenswrapper[4980]: I0107 04:15:31.954288 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkmt6\" (UniqueName: \"kubernetes.io/projected/4164bd91-b646-4ebe-9d8d-31de204e19f6-kube-api-access-wkmt6\") pod \"redhat-marketplace-fwgng\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:32 crc kubenswrapper[4980]: I0107 04:15:32.013196 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:32 crc kubenswrapper[4980]: I0107 04:15:32.493457 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwgng"] Jan 07 04:15:32 crc kubenswrapper[4980]: W0107 04:15:32.502167 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4164bd91_b646_4ebe_9d8d_31de204e19f6.slice/crio-8bd22231e17dae94d1af9f7d5c2ac8f62609c2c4969e9a23664aa93fbfb6b0de WatchSource:0}: Error finding container 8bd22231e17dae94d1af9f7d5c2ac8f62609c2c4969e9a23664aa93fbfb6b0de: Status 404 returned error can't find the container with id 8bd22231e17dae94d1af9f7d5c2ac8f62609c2c4969e9a23664aa93fbfb6b0de Jan 07 04:15:33 crc kubenswrapper[4980]: I0107 04:15:33.403632 4980 generic.go:334] "Generic (PLEG): container finished" podID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerID="a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678" exitCode=0 Jan 07 04:15:33 crc kubenswrapper[4980]: I0107 04:15:33.403743 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwgng" event={"ID":"4164bd91-b646-4ebe-9d8d-31de204e19f6","Type":"ContainerDied","Data":"a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678"} Jan 07 04:15:33 crc kubenswrapper[4980]: I0107 04:15:33.404343 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwgng" event={"ID":"4164bd91-b646-4ebe-9d8d-31de204e19f6","Type":"ContainerStarted","Data":"8bd22231e17dae94d1af9f7d5c2ac8f62609c2c4969e9a23664aa93fbfb6b0de"} Jan 07 04:15:33 crc kubenswrapper[4980]: I0107 04:15:33.407155 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 04:15:34 crc kubenswrapper[4980]: I0107 04:15:34.418723 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-fwgng" event={"ID":"4164bd91-b646-4ebe-9d8d-31de204e19f6","Type":"ContainerStarted","Data":"15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285"} Jan 07 04:15:35 crc kubenswrapper[4980]: I0107 04:15:35.431692 4980 generic.go:334] "Generic (PLEG): container finished" podID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerID="15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285" exitCode=0 Jan 07 04:15:35 crc kubenswrapper[4980]: I0107 04:15:35.431812 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwgng" event={"ID":"4164bd91-b646-4ebe-9d8d-31de204e19f6","Type":"ContainerDied","Data":"15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285"} Jan 07 04:15:36 crc kubenswrapper[4980]: I0107 04:15:36.445220 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwgng" event={"ID":"4164bd91-b646-4ebe-9d8d-31de204e19f6","Type":"ContainerStarted","Data":"b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180"} Jan 07 04:15:36 crc kubenswrapper[4980]: I0107 04:15:36.486728 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fwgng" podStartSLOduration=2.946676376 podStartE2EDuration="5.486699469s" podCreationTimestamp="2026-01-07 04:15:31 +0000 UTC" firstStartedPulling="2026-01-07 04:15:33.406756295 +0000 UTC m=+2579.972451070" lastFinishedPulling="2026-01-07 04:15:35.946779398 +0000 UTC m=+2582.512474163" observedRunningTime="2026-01-07 04:15:36.472063746 +0000 UTC m=+2583.037758521" watchObservedRunningTime="2026-01-07 04:15:36.486699469 +0000 UTC m=+2583.052394244" Jan 07 04:15:41 crc kubenswrapper[4980]: I0107 04:15:41.990995 4980 scope.go:117] "RemoveContainer" containerID="21b2b1e6fd1c40ca83a8857e70933c9a9dea5971e8af4944cd7932c451885613" Jan 07 04:15:42 crc kubenswrapper[4980]: I0107 04:15:42.014522 4980 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:42 crc kubenswrapper[4980]: I0107 04:15:42.014622 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:42 crc kubenswrapper[4980]: I0107 04:15:42.091850 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:42 crc kubenswrapper[4980]: I0107 04:15:42.607964 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:42 crc kubenswrapper[4980]: I0107 04:15:42.683655 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwgng"] Jan 07 04:15:44 crc kubenswrapper[4980]: I0107 04:15:44.540149 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fwgng" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="registry-server" containerID="cri-o://b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180" gracePeriod=2 Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.225231 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.325772 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-catalog-content\") pod \"4164bd91-b646-4ebe-9d8d-31de204e19f6\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.352812 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4164bd91-b646-4ebe-9d8d-31de204e19f6" (UID: "4164bd91-b646-4ebe-9d8d-31de204e19f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.428533 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-utilities\") pod \"4164bd91-b646-4ebe-9d8d-31de204e19f6\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.428754 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkmt6\" (UniqueName: \"kubernetes.io/projected/4164bd91-b646-4ebe-9d8d-31de204e19f6-kube-api-access-wkmt6\") pod \"4164bd91-b646-4ebe-9d8d-31de204e19f6\" (UID: \"4164bd91-b646-4ebe-9d8d-31de204e19f6\") " Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.429406 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-utilities" (OuterVolumeSpecName: "utilities") pod "4164bd91-b646-4ebe-9d8d-31de204e19f6" (UID: "4164bd91-b646-4ebe-9d8d-31de204e19f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.429530 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.429635 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4164bd91-b646-4ebe-9d8d-31de204e19f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.437446 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4164bd91-b646-4ebe-9d8d-31de204e19f6-kube-api-access-wkmt6" (OuterVolumeSpecName: "kube-api-access-wkmt6") pod "4164bd91-b646-4ebe-9d8d-31de204e19f6" (UID: "4164bd91-b646-4ebe-9d8d-31de204e19f6"). InnerVolumeSpecName "kube-api-access-wkmt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.531632 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkmt6\" (UniqueName: \"kubernetes.io/projected/4164bd91-b646-4ebe-9d8d-31de204e19f6-kube-api-access-wkmt6\") on node \"crc\" DevicePath \"\"" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.554962 4980 generic.go:334] "Generic (PLEG): container finished" podID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerID="b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180" exitCode=0 Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.555030 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwgng" event={"ID":"4164bd91-b646-4ebe-9d8d-31de204e19f6","Type":"ContainerDied","Data":"b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180"} Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.555073 4980 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwgng" event={"ID":"4164bd91-b646-4ebe-9d8d-31de204e19f6","Type":"ContainerDied","Data":"8bd22231e17dae94d1af9f7d5c2ac8f62609c2c4969e9a23664aa93fbfb6b0de"} Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.555107 4980 scope.go:117] "RemoveContainer" containerID="b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.555316 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwgng" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.592987 4980 scope.go:117] "RemoveContainer" containerID="15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.614820 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwgng"] Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.624718 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwgng"] Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.631318 4980 scope.go:117] "RemoveContainer" containerID="a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.678252 4980 scope.go:117] "RemoveContainer" containerID="b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180" Jan 07 04:15:45 crc kubenswrapper[4980]: E0107 04:15:45.678996 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180\": container with ID starting with b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180 not found: ID does not exist" containerID="b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 
04:15:45.679059 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180"} err="failed to get container status \"b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180\": rpc error: code = NotFound desc = could not find container \"b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180\": container with ID starting with b30ae81d6199197a888ae6b0bfb6861e27e068a551027fa3141f6bd7e3f20180 not found: ID does not exist" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.679090 4980 scope.go:117] "RemoveContainer" containerID="15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285" Jan 07 04:15:45 crc kubenswrapper[4980]: E0107 04:15:45.679472 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285\": container with ID starting with 15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285 not found: ID does not exist" containerID="15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.679501 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285"} err="failed to get container status \"15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285\": rpc error: code = NotFound desc = could not find container \"15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285\": container with ID starting with 15f554695755858189e271e6bc93b4e7079c3bf042d77a91d02cc35b38c99285 not found: ID does not exist" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.679522 4980 scope.go:117] "RemoveContainer" containerID="a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678" Jan 07 04:15:45 crc 
kubenswrapper[4980]: E0107 04:15:45.679892 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678\": container with ID starting with a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678 not found: ID does not exist" containerID="a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.679934 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678"} err="failed to get container status \"a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678\": rpc error: code = NotFound desc = could not find container \"a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678\": container with ID starting with a522fa08b322685869576a5a02769a1086850adba403f0f563668e240068d678 not found: ID does not exist" Jan 07 04:15:45 crc kubenswrapper[4980]: I0107 04:15:45.747190 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" path="/var/lib/kubelet/pods/4164bd91-b646-4ebe-9d8d-31de204e19f6/volumes" Jan 07 04:16:42 crc kubenswrapper[4980]: I0107 04:16:42.074407 4980 scope.go:117] "RemoveContainer" containerID="d630f9aa4e75a6543802e1b1030846357eeae35708f045b0c0280df646e6c9e4" Jan 07 04:16:42 crc kubenswrapper[4980]: I0107 04:16:42.110895 4980 scope.go:117] "RemoveContainer" containerID="34c504237345e10c774fc55f95cb3c5e85200bc6ccbac5efe6c4941fd59f07fc" Jan 07 04:16:42 crc kubenswrapper[4980]: I0107 04:16:42.171477 4980 scope.go:117] "RemoveContainer" containerID="ee9d7fec32aa1b7c9921c7989be2be4d1e25698cf0e4f2389bad409428a03fc6" Jan 07 04:17:06 crc kubenswrapper[4980]: I0107 04:17:06.543124 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:17:06 crc kubenswrapper[4980]: I0107 04:17:06.543846 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:17:36 crc kubenswrapper[4980]: I0107 04:17:36.543275 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:17:36 crc kubenswrapper[4980]: I0107 04:17:36.543962 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:18:06 crc kubenswrapper[4980]: I0107 04:18:06.542816 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:18:06 crc kubenswrapper[4980]: I0107 04:18:06.543517 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:18:06 crc kubenswrapper[4980]: I0107 04:18:06.543605 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 04:18:06 crc kubenswrapper[4980]: I0107 04:18:06.544644 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09c860ca894a226a129a0988a78a99e28eb4b51d74ecc7cd9b27a6746346462b"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 04:18:06 crc kubenswrapper[4980]: I0107 04:18:06.544764 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://09c860ca894a226a129a0988a78a99e28eb4b51d74ecc7cd9b27a6746346462b" gracePeriod=600 Jan 07 04:18:07 crc kubenswrapper[4980]: I0107 04:18:07.200071 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="09c860ca894a226a129a0988a78a99e28eb4b51d74ecc7cd9b27a6746346462b" exitCode=0 Jan 07 04:18:07 crc kubenswrapper[4980]: I0107 04:18:07.200148 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"09c860ca894a226a129a0988a78a99e28eb4b51d74ecc7cd9b27a6746346462b"} Jan 07 04:18:07 crc kubenswrapper[4980]: I0107 04:18:07.200785 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8"} Jan 07 04:18:07 crc kubenswrapper[4980]: I0107 04:18:07.200819 4980 scope.go:117] "RemoveContainer" containerID="d33444ede5fd5aeb2f6009f905f81c1a51ceba5bb923e70f8a7354ee7496eb2e" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.318544 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8nv82"] Jan 07 04:18:11 crc kubenswrapper[4980]: E0107 04:18:11.319974 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="extract-utilities" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.319999 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="extract-utilities" Jan 07 04:18:11 crc kubenswrapper[4980]: E0107 04:18:11.320075 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="extract-content" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.320092 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="extract-content" Jan 07 04:18:11 crc kubenswrapper[4980]: E0107 04:18:11.320118 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="registry-server" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.320169 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="registry-server" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.320738 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4164bd91-b646-4ebe-9d8d-31de204e19f6" containerName="registry-server" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.323929 4980 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.348088 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nv82"] Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.372247 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-catalog-content\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.372674 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmhgj\" (UniqueName: \"kubernetes.io/projected/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-kube-api-access-kmhgj\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.373504 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-utilities\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.475959 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-catalog-content\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.476035 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kmhgj\" (UniqueName: \"kubernetes.io/projected/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-kube-api-access-kmhgj\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.476145 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-utilities\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.476490 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-catalog-content\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.476665 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-utilities\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.500642 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmhgj\" (UniqueName: \"kubernetes.io/projected/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-kube-api-access-kmhgj\") pod \"redhat-operators-8nv82\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:11 crc kubenswrapper[4980]: I0107 04:18:11.662117 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:12 crc kubenswrapper[4980]: I0107 04:18:12.122139 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nv82"] Jan 07 04:18:12 crc kubenswrapper[4980]: I0107 04:18:12.290698 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nv82" event={"ID":"bc1a66c5-cb23-47b2-ac9a-036a05d845fd","Type":"ContainerStarted","Data":"630f69156a43c7cc7ce44510cca8cc84b1b2cb49c019c010aeaa1b63638528c5"} Jan 07 04:18:13 crc kubenswrapper[4980]: I0107 04:18:13.303268 4980 generic.go:334] "Generic (PLEG): container finished" podID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerID="42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8" exitCode=0 Jan 07 04:18:13 crc kubenswrapper[4980]: I0107 04:18:13.303320 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nv82" event={"ID":"bc1a66c5-cb23-47b2-ac9a-036a05d845fd","Type":"ContainerDied","Data":"42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8"} Jan 07 04:18:15 crc kubenswrapper[4980]: I0107 04:18:15.327135 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nv82" event={"ID":"bc1a66c5-cb23-47b2-ac9a-036a05d845fd","Type":"ContainerStarted","Data":"afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d"} Jan 07 04:18:17 crc kubenswrapper[4980]: I0107 04:18:17.355103 4980 generic.go:334] "Generic (PLEG): container finished" podID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerID="afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d" exitCode=0 Jan 07 04:18:17 crc kubenswrapper[4980]: I0107 04:18:17.355192 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nv82" 
event={"ID":"bc1a66c5-cb23-47b2-ac9a-036a05d845fd","Type":"ContainerDied","Data":"afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d"} Jan 07 04:18:18 crc kubenswrapper[4980]: I0107 04:18:18.367313 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nv82" event={"ID":"bc1a66c5-cb23-47b2-ac9a-036a05d845fd","Type":"ContainerStarted","Data":"526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493"} Jan 07 04:18:18 crc kubenswrapper[4980]: I0107 04:18:18.394757 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8nv82" podStartSLOduration=2.591519394 podStartE2EDuration="7.394734717s" podCreationTimestamp="2026-01-07 04:18:11 +0000 UTC" firstStartedPulling="2026-01-07 04:18:13.305857499 +0000 UTC m=+2739.871552234" lastFinishedPulling="2026-01-07 04:18:18.109072822 +0000 UTC m=+2744.674767557" observedRunningTime="2026-01-07 04:18:18.393804769 +0000 UTC m=+2744.959499534" watchObservedRunningTime="2026-01-07 04:18:18.394734717 +0000 UTC m=+2744.960429452" Jan 07 04:18:21 crc kubenswrapper[4980]: I0107 04:18:21.663264 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:21 crc kubenswrapper[4980]: I0107 04:18:21.663829 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:22 crc kubenswrapper[4980]: I0107 04:18:22.411103 4980 generic.go:334] "Generic (PLEG): container finished" podID="7cd7afa6-8208-47e3-b598-0f2e8578dc3f" containerID="ecfa64c3ca56d47680795e32ed8b2b47ecb261712c98f9a5cb6e393160abbe6d" exitCode=0 Jan 07 04:18:22 crc kubenswrapper[4980]: I0107 04:18:22.411334 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" 
event={"ID":"7cd7afa6-8208-47e3-b598-0f2e8578dc3f","Type":"ContainerDied","Data":"ecfa64c3ca56d47680795e32ed8b2b47ecb261712c98f9a5cb6e393160abbe6d"} Jan 07 04:18:22 crc kubenswrapper[4980]: I0107 04:18:22.736021 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8nv82" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="registry-server" probeResult="failure" output=< Jan 07 04:18:22 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 04:18:22 crc kubenswrapper[4980]: > Jan 07 04:18:23 crc kubenswrapper[4980]: I0107 04:18:23.911926 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.015078 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-1\") pod \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.015114 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-2\") pod \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.015276 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-0\") pod \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 
04:18:24.015320 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-286ld\" (UniqueName: \"kubernetes.io/projected/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-kube-api-access-286ld\") pod \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.015349 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-telemetry-combined-ca-bundle\") pod \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.015373 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-inventory\") pod \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.015408 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ssh-key-openstack-edpm-ipam\") pod \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\" (UID: \"7cd7afa6-8208-47e3-b598-0f2e8578dc3f\") " Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.029715 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-kube-api-access-286ld" (OuterVolumeSpecName: "kube-api-access-286ld") pod "7cd7afa6-8208-47e3-b598-0f2e8578dc3f" (UID: "7cd7afa6-8208-47e3-b598-0f2e8578dc3f"). InnerVolumeSpecName "kube-api-access-286ld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.053789 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7cd7afa6-8208-47e3-b598-0f2e8578dc3f" (UID: "7cd7afa6-8208-47e3-b598-0f2e8578dc3f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.077285 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-inventory" (OuterVolumeSpecName: "inventory") pod "7cd7afa6-8208-47e3-b598-0f2e8578dc3f" (UID: "7cd7afa6-8208-47e3-b598-0f2e8578dc3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.079416 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7cd7afa6-8208-47e3-b598-0f2e8578dc3f" (UID: "7cd7afa6-8208-47e3-b598-0f2e8578dc3f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.084891 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7cd7afa6-8208-47e3-b598-0f2e8578dc3f" (UID: "7cd7afa6-8208-47e3-b598-0f2e8578dc3f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.085527 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7cd7afa6-8208-47e3-b598-0f2e8578dc3f" (UID: "7cd7afa6-8208-47e3-b598-0f2e8578dc3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.097909 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7cd7afa6-8208-47e3-b598-0f2e8578dc3f" (UID: "7cd7afa6-8208-47e3-b598-0f2e8578dc3f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.117707 4980 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-inventory\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.117830 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.117889 4980 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.117957 4980 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.118013 4980 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.118086 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-286ld\" (UniqueName: \"kubernetes.io/projected/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-kube-api-access-286ld\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.118148 4980 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7afa6-8208-47e3-b598-0f2e8578dc3f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.436462 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" event={"ID":"7cd7afa6-8208-47e3-b598-0f2e8578dc3f","Type":"ContainerDied","Data":"57c8a68e0acf8840081f3f549bfbef5faf629992e0c37e1e97942acff4c284a7"} Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.436805 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c8a68e0acf8840081f3f549bfbef5faf629992e0c37e1e97942acff4c284a7" Jan 07 04:18:24 crc kubenswrapper[4980]: I0107 04:18:24.436585 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb" Jan 07 04:18:31 crc kubenswrapper[4980]: I0107 04:18:31.758942 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:31 crc kubenswrapper[4980]: I0107 04:18:31.843470 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:32 crc kubenswrapper[4980]: I0107 04:18:32.035697 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nv82"] Jan 07 04:18:33 crc kubenswrapper[4980]: I0107 04:18:33.539230 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8nv82" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="registry-server" containerID="cri-o://526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493" gracePeriod=2 Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.038806 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.167798 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-utilities\") pod \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.167911 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-catalog-content\") pod \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.167987 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmhgj\" (UniqueName: \"kubernetes.io/projected/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-kube-api-access-kmhgj\") pod \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\" (UID: \"bc1a66c5-cb23-47b2-ac9a-036a05d845fd\") " Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.169054 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-utilities" (OuterVolumeSpecName: "utilities") pod "bc1a66c5-cb23-47b2-ac9a-036a05d845fd" (UID: "bc1a66c5-cb23-47b2-ac9a-036a05d845fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.176742 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-kube-api-access-kmhgj" (OuterVolumeSpecName: "kube-api-access-kmhgj") pod "bc1a66c5-cb23-47b2-ac9a-036a05d845fd" (UID: "bc1a66c5-cb23-47b2-ac9a-036a05d845fd"). InnerVolumeSpecName "kube-api-access-kmhgj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.271066 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.271120 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmhgj\" (UniqueName: \"kubernetes.io/projected/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-kube-api-access-kmhgj\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.317638 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc1a66c5-cb23-47b2-ac9a-036a05d845fd" (UID: "bc1a66c5-cb23-47b2-ac9a-036a05d845fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.373701 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a66c5-cb23-47b2-ac9a-036a05d845fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.554348 4980 generic.go:334] "Generic (PLEG): container finished" podID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerID="526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493" exitCode=0 Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.554421 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nv82" event={"ID":"bc1a66c5-cb23-47b2-ac9a-036a05d845fd","Type":"ContainerDied","Data":"526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493"} Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.554464 4980 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-8nv82" event={"ID":"bc1a66c5-cb23-47b2-ac9a-036a05d845fd","Type":"ContainerDied","Data":"630f69156a43c7cc7ce44510cca8cc84b1b2cb49c019c010aeaa1b63638528c5"} Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.554494 4980 scope.go:117] "RemoveContainer" containerID="526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.554720 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nv82" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.595318 4980 scope.go:117] "RemoveContainer" containerID="afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.615500 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nv82"] Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.623179 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8nv82"] Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.630928 4980 scope.go:117] "RemoveContainer" containerID="42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.664907 4980 scope.go:117] "RemoveContainer" containerID="526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493" Jan 07 04:18:34 crc kubenswrapper[4980]: E0107 04:18:34.665891 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493\": container with ID starting with 526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493 not found: ID does not exist" containerID="526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.665929 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493"} err="failed to get container status \"526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493\": rpc error: code = NotFound desc = could not find container \"526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493\": container with ID starting with 526d3e74e1098d7fe7364f073ed3d164da453295fe347926d374d3dc2e7f8493 not found: ID does not exist" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.665948 4980 scope.go:117] "RemoveContainer" containerID="afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d" Jan 07 04:18:34 crc kubenswrapper[4980]: E0107 04:18:34.666308 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d\": container with ID starting with afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d not found: ID does not exist" containerID="afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.666332 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d"} err="failed to get container status \"afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d\": rpc error: code = NotFound desc = could not find container \"afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d\": container with ID starting with afa57c5df615cbac43a2c41c5c1672a5740adb50c556e398f613b6b06634d65d not found: ID does not exist" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.666346 4980 scope.go:117] "RemoveContainer" containerID="42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8" Jan 07 04:18:34 crc kubenswrapper[4980]: E0107 
04:18:34.666522 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8\": container with ID starting with 42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8 not found: ID does not exist" containerID="42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8" Jan 07 04:18:34 crc kubenswrapper[4980]: I0107 04:18:34.666541 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8"} err="failed to get container status \"42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8\": rpc error: code = NotFound desc = could not find container \"42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8\": container with ID starting with 42686c9fd64cb1ed858f959350301a1ada05087d5d4ce7deeb4a6938428ea8c8 not found: ID does not exist" Jan 07 04:18:35 crc kubenswrapper[4980]: I0107 04:18:35.748405 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" path="/var/lib/kubelet/pods/bc1a66c5-cb23-47b2-ac9a-036a05d845fd/volumes" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.208722 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 07 04:19:06 crc kubenswrapper[4980]: E0107 04:19:06.210089 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="extract-content" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.210112 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="extract-content" Jan 07 04:19:06 crc kubenswrapper[4980]: E0107 04:19:06.210151 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" 
containerName="extract-utilities" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.210166 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="extract-utilities" Jan 07 04:19:06 crc kubenswrapper[4980]: E0107 04:19:06.210183 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd7afa6-8208-47e3-b598-0f2e8578dc3f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.210199 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd7afa6-8208-47e3-b598-0f2e8578dc3f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 07 04:19:06 crc kubenswrapper[4980]: E0107 04:19:06.210233 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="registry-server" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.210246 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="registry-server" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.210635 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd7afa6-8208-47e3-b598-0f2e8578dc3f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.210666 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1a66c5-cb23-47b2-ac9a-036a05d845fd" containerName="registry-server" Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.211759 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.215367 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.218490 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.218834 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gpw7m"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.218900 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.232945 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.329726 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptl6f\" (UniqueName: \"kubernetes.io/projected/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-kube-api-access-ptl6f\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.329790 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.329833 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.329940 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.330086 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-config-data\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.330128 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.330387 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.330685 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.330802 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.434460 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-config-data\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.434540 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.434705 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.434889 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.434973 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.435086 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptl6f\" (UniqueName: \"kubernetes.io/projected/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-kube-api-access-ptl6f\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.435133 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.435171 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.435210 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.436312 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.436738 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.436778 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.436990 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.437081 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-config-data\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.442808 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.443780 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.446599 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.457167 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptl6f\" (UniqueName: \"kubernetes.io/projected/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-kube-api-access-ptl6f\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.467763 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " pod="openstack/tempest-tests-tempest"
Jan 07 04:19:06 crc kubenswrapper[4980]: I0107 04:19:06.546522 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 07 04:19:07 crc kubenswrapper[4980]: I0107 04:19:07.096410 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 07 04:19:07 crc kubenswrapper[4980]: I0107 04:19:07.945507 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee","Type":"ContainerStarted","Data":"e692e9b993665ac4192823d9769acf1227339a4d431e121330bccf9830743307"}
Jan 07 04:19:38 crc kubenswrapper[4980]: E0107 04:19:38.381119 4980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 07 04:19:38 crc kubenswrapper[4980]: E0107 04:19:38.382030 4980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptl6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(4d0a99f6-8fa8-40a7-b994-16a2e287c6ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 07 04:19:38 crc kubenswrapper[4980]: E0107 04:19:38.383694 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"
Jan 07 04:19:39 crc kubenswrapper[4980]: E0107 04:19:39.319856 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"
Jan 07 04:19:54 crc kubenswrapper[4980]: I0107 04:19:54.216212 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 07 04:19:55 crc kubenswrapper[4980]: I0107 04:19:55.535425 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee","Type":"ContainerStarted","Data":"0a1afb61d45c489e5a5771fbd1cdb94e8833af0092c7953f0dce0ae3ae5d925f"}
Jan 07 04:19:55 crc kubenswrapper[4980]: I0107 04:19:55.564487 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.457988634 podStartE2EDuration="50.564468901s" podCreationTimestamp="2026-01-07 04:19:05 +0000 UTC" firstStartedPulling="2026-01-07 04:19:07.106395987 +0000 UTC m=+2793.672090732" lastFinishedPulling="2026-01-07 04:19:54.212876254 +0000 UTC m=+2840.778570999" observedRunningTime="2026-01-07 04:19:55.562158229 +0000 UTC m=+2842.127853004" watchObservedRunningTime="2026-01-07 04:19:55.564468901 +0000 UTC m=+2842.130163646"
Jan 07 04:20:06 crc kubenswrapper[4980]: I0107 04:20:06.542947 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 04:20:06 crc kubenswrapper[4980]: I0107 04:20:06.543920 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 04:20:36 crc kubenswrapper[4980]: I0107 04:20:36.543472 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 04:20:36 crc kubenswrapper[4980]: I0107 04:20:36.544540 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 04:21:06 crc kubenswrapper[4980]: I0107 04:21:06.543297 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 04:21:06 crc kubenswrapper[4980]: I0107 04:21:06.544049 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 04:21:06 crc kubenswrapper[4980]: I0107 04:21:06.544114 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6"
Jan 07 04:21:06 crc kubenswrapper[4980]: I0107 04:21:06.545036 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 07 04:21:06 crc kubenswrapper[4980]: I0107 04:21:06.545134 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" gracePeriod=600
Jan 07 04:21:06 crc kubenswrapper[4980]: E0107 04:21:06.686804 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:21:07 crc kubenswrapper[4980]: I0107 04:21:07.498406 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" exitCode=0
Jan 07 04:21:07 crc kubenswrapper[4980]: I0107 04:21:07.498472 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8"}
Jan 07 04:21:07 crc kubenswrapper[4980]: I0107 04:21:07.498520 4980 scope.go:117] "RemoveContainer" containerID="09c860ca894a226a129a0988a78a99e28eb4b51d74ecc7cd9b27a6746346462b"
Jan 07 04:21:07 crc kubenswrapper[4980]: I0107 04:21:07.499489 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8"
Jan 07 04:21:07 crc kubenswrapper[4980]: E0107 04:21:07.500117 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:21:17 crc kubenswrapper[4980]: I0107 04:21:17.735936 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8"
Jan 07 04:21:17 crc kubenswrapper[4980]: E0107 04:21:17.737266 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.440031 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lllsq"]
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.442315 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.461741 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lllsq"]
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.549574 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-catalog-content\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.549861 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-utilities\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.549999 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljls\" (UniqueName: \"kubernetes.io/projected/c1350e91-a11f-4116-a6f0-47173d84fbb9-kube-api-access-8ljls\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.639030 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s9jkr"]
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.641638 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.652254 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-utilities\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.652355 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljls\" (UniqueName: \"kubernetes.io/projected/c1350e91-a11f-4116-a6f0-47173d84fbb9-kube-api-access-8ljls\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.652466 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-catalog-content\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.653056 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-catalog-content\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.653323 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-utilities\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.661203 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9jkr"]
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.693990 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljls\" (UniqueName: \"kubernetes.io/projected/c1350e91-a11f-4116-a6f0-47173d84fbb9-kube-api-access-8ljls\") pod \"certified-operators-lllsq\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.754362 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqknp\" (UniqueName: \"kubernetes.io/projected/4b80cf65-5f11-4557-aaff-5fd609f41fbd-kube-api-access-vqknp\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.754417 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-catalog-content\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.754494 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-utilities\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.816081 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.856403 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqknp\" (UniqueName: \"kubernetes.io/projected/4b80cf65-5f11-4557-aaff-5fd609f41fbd-kube-api-access-vqknp\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.856891 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-catalog-content\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.857369 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-catalog-content\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.858071 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-utilities\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.859893 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-utilities\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.882415 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqknp\" (UniqueName: \"kubernetes.io/projected/4b80cf65-5f11-4557-aaff-5fd609f41fbd-kube-api-access-vqknp\") pod \"community-operators-s9jkr\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:18 crc kubenswrapper[4980]: I0107 04:21:18.968183 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:19 crc kubenswrapper[4980]: I0107 04:21:19.364941 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lllsq"]
Jan 07 04:21:19 crc kubenswrapper[4980]: I0107 04:21:19.537594 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9jkr"]
Jan 07 04:21:19 crc kubenswrapper[4980]: I0107 04:21:19.647110 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9jkr" event={"ID":"4b80cf65-5f11-4557-aaff-5fd609f41fbd","Type":"ContainerStarted","Data":"1f73e354a8b8ac2725c265d8764d4f3f1902637774ea4a16dd87ebe357d2019f"}
Jan 07 04:21:19 crc kubenswrapper[4980]: I0107 04:21:19.649536 4980 generic.go:334] "Generic (PLEG): container finished" podID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerID="e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f" exitCode=0
Jan 07 04:21:19 crc kubenswrapper[4980]: I0107 04:21:19.649591 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lllsq" event={"ID":"c1350e91-a11f-4116-a6f0-47173d84fbb9","Type":"ContainerDied","Data":"e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f"}
Jan 07 04:21:19 crc kubenswrapper[4980]: I0107 04:21:19.649610 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lllsq" event={"ID":"c1350e91-a11f-4116-a6f0-47173d84fbb9","Type":"ContainerStarted","Data":"2a2e56440f57c2aca702a84228bce82f55ff8a26dc2449277978d70b3144eb54"}
Jan 07 04:21:19 crc kubenswrapper[4980]: I0107 04:21:19.652350 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 07 04:21:20 crc kubenswrapper[4980]: I0107 04:21:20.663086 4980 generic.go:334] "Generic (PLEG): container finished" podID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerID="8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04" exitCode=0
Jan 07 04:21:20 crc kubenswrapper[4980]: I0107 04:21:20.663222 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9jkr" event={"ID":"4b80cf65-5f11-4557-aaff-5fd609f41fbd","Type":"ContainerDied","Data":"8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04"}
Jan 07 04:21:20 crc kubenswrapper[4980]: I0107 04:21:20.670436 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lllsq" event={"ID":"c1350e91-a11f-4116-a6f0-47173d84fbb9","Type":"ContainerStarted","Data":"f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955"}
Jan 07 04:21:21 crc kubenswrapper[4980]: I0107 04:21:21.690744 4980 generic.go:334] "Generic (PLEG): container finished" podID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerID="f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955" exitCode=0
Jan 07 04:21:21 crc kubenswrapper[4980]: I0107 04:21:21.691056 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lllsq" event={"ID":"c1350e91-a11f-4116-a6f0-47173d84fbb9","Type":"ContainerDied","Data":"f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955"}
Jan 07 04:21:21 crc kubenswrapper[4980]: I0107 04:21:21.697405 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9jkr" event={"ID":"4b80cf65-5f11-4557-aaff-5fd609f41fbd","Type":"ContainerStarted","Data":"bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96"}
Jan 07 04:21:22 crc kubenswrapper[4980]: I0107 04:21:22.712081 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lllsq" event={"ID":"c1350e91-a11f-4116-a6f0-47173d84fbb9","Type":"ContainerStarted","Data":"a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85"}
Jan 07 04:21:22 crc kubenswrapper[4980]: I0107 04:21:22.735204 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lllsq" podStartSLOduration=2.005397609 podStartE2EDuration="4.735186012s" podCreationTimestamp="2026-01-07 04:21:18 +0000 UTC" firstStartedPulling="2026-01-07 04:21:19.652137256 +0000 UTC m=+2926.217831991" lastFinishedPulling="2026-01-07 04:21:22.381925629 +0000 UTC m=+2928.947620394" observedRunningTime="2026-01-07 04:21:22.727886905 +0000 UTC m=+2929.293581640" watchObservedRunningTime="2026-01-07 04:21:22.735186012 +0000 UTC m=+2929.300880747"
Jan 07 04:21:23 crc kubenswrapper[4980]: I0107 04:21:23.726499 4980 generic.go:334] "Generic (PLEG): container finished" podID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerID="bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96" exitCode=0
Jan 07 04:21:23 crc kubenswrapper[4980]: I0107 04:21:23.726604 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9jkr" event={"ID":"4b80cf65-5f11-4557-aaff-5fd609f41fbd","Type":"ContainerDied","Data":"bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96"}
Jan 07 04:21:24 crc kubenswrapper[4980]: I0107 04:21:24.740839 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9jkr" event={"ID":"4b80cf65-5f11-4557-aaff-5fd609f41fbd","Type":"ContainerStarted","Data":"d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62"}
Jan 07 04:21:24 crc kubenswrapper[4980]: I0107 04:21:24.768783 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s9jkr" podStartSLOduration=3.237545007 podStartE2EDuration="6.768757654s" podCreationTimestamp="2026-01-07 04:21:18 +0000 UTC" firstStartedPulling="2026-01-07 04:21:20.668240967 +0000 UTC m=+2927.233935712" lastFinishedPulling="2026-01-07 04:21:24.199453624 +0000 UTC m=+2930.765148359" observedRunningTime="2026-01-07 04:21:24.759406805 +0000 UTC m=+2931.325101550" watchObservedRunningTime="2026-01-07 04:21:24.768757654 +0000 UTC m=+2931.334452419"
Jan 07 04:21:28 crc kubenswrapper[4980]: I0107 04:21:28.816982 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:28 crc kubenswrapper[4980]: I0107 04:21:28.817652 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:28 crc kubenswrapper[4980]: I0107 04:21:28.896425 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:28 crc kubenswrapper[4980]: I0107 04:21:28.970164 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:28 crc kubenswrapper[4980]: I0107 04:21:28.970232 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:29 crc kubenswrapper[4980]: I0107 04:21:29.050461 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:29 crc kubenswrapper[4980]: I0107 04:21:29.854603 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s9jkr"
Jan 07 04:21:29 crc kubenswrapper[4980]: I0107 04:21:29.856853 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lllsq"
Jan 07 04:21:30 crc kubenswrapper[4980]: I0107 04:21:30.434273 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9jkr"]
Jan 07 04:21:30 crc kubenswrapper[4980]: I0107 04:21:30.735779 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8"
Jan 07 04:21:30 crc kubenswrapper[4980]: E0107 04:21:30.736101 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4"
Jan 07 04:21:31 crc kubenswrapper[4980]: I0107 04:21:31.827145 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s9jkr" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="registry-server" containerID="cri-o://d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62" gracePeriod=2
Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.235691 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lllsq"]
Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.236193 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lllsq" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="registry-server"
containerID="cri-o://a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85" gracePeriod=2 Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.479162 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9jkr" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.628525 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-utilities\") pod \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.628658 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqknp\" (UniqueName: \"kubernetes.io/projected/4b80cf65-5f11-4557-aaff-5fd609f41fbd-kube-api-access-vqknp\") pod \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.628700 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-catalog-content\") pod \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\" (UID: \"4b80cf65-5f11-4557-aaff-5fd609f41fbd\") " Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.629552 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-utilities" (OuterVolumeSpecName: "utilities") pod "4b80cf65-5f11-4557-aaff-5fd609f41fbd" (UID: "4b80cf65-5f11-4557-aaff-5fd609f41fbd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.637684 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b80cf65-5f11-4557-aaff-5fd609f41fbd-kube-api-access-vqknp" (OuterVolumeSpecName: "kube-api-access-vqknp") pod "4b80cf65-5f11-4557-aaff-5fd609f41fbd" (UID: "4b80cf65-5f11-4557-aaff-5fd609f41fbd"). InnerVolumeSpecName "kube-api-access-vqknp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.667764 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lllsq" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.677738 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b80cf65-5f11-4557-aaff-5fd609f41fbd" (UID: "4b80cf65-5f11-4557-aaff-5fd609f41fbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.730729 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.730761 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqknp\" (UniqueName: \"kubernetes.io/projected/4b80cf65-5f11-4557-aaff-5fd609f41fbd-kube-api-access-vqknp\") on node \"crc\" DevicePath \"\"" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.730772 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b80cf65-5f11-4557-aaff-5fd609f41fbd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.832172 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-utilities\") pod \"c1350e91-a11f-4116-a6f0-47173d84fbb9\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.832246 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-catalog-content\") pod \"c1350e91-a11f-4116-a6f0-47173d84fbb9\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.832394 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ljls\" (UniqueName: \"kubernetes.io/projected/c1350e91-a11f-4116-a6f0-47173d84fbb9-kube-api-access-8ljls\") pod \"c1350e91-a11f-4116-a6f0-47173d84fbb9\" (UID: \"c1350e91-a11f-4116-a6f0-47173d84fbb9\") " Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.832764 
4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-utilities" (OuterVolumeSpecName: "utilities") pod "c1350e91-a11f-4116-a6f0-47173d84fbb9" (UID: "c1350e91-a11f-4116-a6f0-47173d84fbb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.832854 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.835919 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1350e91-a11f-4116-a6f0-47173d84fbb9-kube-api-access-8ljls" (OuterVolumeSpecName: "kube-api-access-8ljls") pod "c1350e91-a11f-4116-a6f0-47173d84fbb9" (UID: "c1350e91-a11f-4116-a6f0-47173d84fbb9"). InnerVolumeSpecName "kube-api-access-8ljls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.839296 4980 generic.go:334] "Generic (PLEG): container finished" podID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerID="a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85" exitCode=0 Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.839344 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lllsq" event={"ID":"c1350e91-a11f-4116-a6f0-47173d84fbb9","Type":"ContainerDied","Data":"a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85"} Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.839359 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lllsq" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.839397 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lllsq" event={"ID":"c1350e91-a11f-4116-a6f0-47173d84fbb9","Type":"ContainerDied","Data":"2a2e56440f57c2aca702a84228bce82f55ff8a26dc2449277978d70b3144eb54"} Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.839424 4980 scope.go:117] "RemoveContainer" containerID="a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.844757 4980 generic.go:334] "Generic (PLEG): container finished" podID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerID="d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62" exitCode=0 Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.844813 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9jkr" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.844817 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9jkr" event={"ID":"4b80cf65-5f11-4557-aaff-5fd609f41fbd","Type":"ContainerDied","Data":"d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62"} Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.844865 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9jkr" event={"ID":"4b80cf65-5f11-4557-aaff-5fd609f41fbd","Type":"ContainerDied","Data":"1f73e354a8b8ac2725c265d8764d4f3f1902637774ea4a16dd87ebe357d2019f"} Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.864886 4980 scope.go:117] "RemoveContainer" containerID="f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.896141 4980 scope.go:117] "RemoveContainer" 
containerID="e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.897457 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1350e91-a11f-4116-a6f0-47173d84fbb9" (UID: "c1350e91-a11f-4116-a6f0-47173d84fbb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.897666 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9jkr"] Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.904928 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s9jkr"] Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.918111 4980 scope.go:117] "RemoveContainer" containerID="a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85" Jan 07 04:21:32 crc kubenswrapper[4980]: E0107 04:21:32.918541 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85\": container with ID starting with a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85 not found: ID does not exist" containerID="a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.918584 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85"} err="failed to get container status \"a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85\": rpc error: code = NotFound desc = could not find container \"a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85\": container with ID 
starting with a0adc03a01249f2611ffdc522e2cefbf8e14c1b12cb9e4bd469e8633c7e21e85 not found: ID does not exist" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.918605 4980 scope.go:117] "RemoveContainer" containerID="f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955" Jan 07 04:21:32 crc kubenswrapper[4980]: E0107 04:21:32.918851 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955\": container with ID starting with f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955 not found: ID does not exist" containerID="f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.918874 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955"} err="failed to get container status \"f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955\": rpc error: code = NotFound desc = could not find container \"f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955\": container with ID starting with f61372947ab247fd952e3ab1cb51e589fd653076549813d7c3b5b537d5d90955 not found: ID does not exist" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.918886 4980 scope.go:117] "RemoveContainer" containerID="e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f" Jan 07 04:21:32 crc kubenswrapper[4980]: E0107 04:21:32.919170 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f\": container with ID starting with e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f not found: ID does not exist" containerID="e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f" Jan 07 
04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.919214 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f"} err="failed to get container status \"e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f\": rpc error: code = NotFound desc = could not find container \"e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f\": container with ID starting with e22a539b02e11258db65bea2b19e175c137005b2de4e5e9d919daed9d819bd2f not found: ID does not exist" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.919246 4980 scope.go:117] "RemoveContainer" containerID="d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.934030 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ljls\" (UniqueName: \"kubernetes.io/projected/c1350e91-a11f-4116-a6f0-47173d84fbb9-kube-api-access-8ljls\") on node \"crc\" DevicePath \"\"" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.934060 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1350e91-a11f-4116-a6f0-47173d84fbb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.941869 4980 scope.go:117] "RemoveContainer" containerID="bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96" Jan 07 04:21:32 crc kubenswrapper[4980]: I0107 04:21:32.974611 4980 scope.go:117] "RemoveContainer" containerID="8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.062306 4980 scope.go:117] "RemoveContainer" containerID="d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62" Jan 07 04:21:33 crc kubenswrapper[4980]: E0107 04:21:33.062886 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62\": container with ID starting with d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62 not found: ID does not exist" containerID="d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.062916 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62"} err="failed to get container status \"d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62\": rpc error: code = NotFound desc = could not find container \"d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62\": container with ID starting with d91af77460883022582b83f63778c25b692cc6aa218799f1f8c18fafc7a82b62 not found: ID does not exist" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.062936 4980 scope.go:117] "RemoveContainer" containerID="bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96" Jan 07 04:21:33 crc kubenswrapper[4980]: E0107 04:21:33.063486 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96\": container with ID starting with bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96 not found: ID does not exist" containerID="bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.063512 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96"} err="failed to get container status \"bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96\": rpc error: code = NotFound desc = could not find container 
\"bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96\": container with ID starting with bd2dacaf746020228a53653113061b3949af7624ea78dbe048e6193380a87b96 not found: ID does not exist" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.063529 4980 scope.go:117] "RemoveContainer" containerID="8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04" Jan 07 04:21:33 crc kubenswrapper[4980]: E0107 04:21:33.063934 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04\": container with ID starting with 8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04 not found: ID does not exist" containerID="8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.063993 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04"} err="failed to get container status \"8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04\": rpc error: code = NotFound desc = could not find container \"8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04\": container with ID starting with 8990ae7707c6daf98bc0c8457f71eb8424e54e6755f97f15e1c2089131861b04 not found: ID does not exist" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.186428 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lllsq"] Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.195513 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lllsq"] Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.746194 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" 
path="/var/lib/kubelet/pods/4b80cf65-5f11-4557-aaff-5fd609f41fbd/volumes" Jan 07 04:21:33 crc kubenswrapper[4980]: I0107 04:21:33.746890 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" path="/var/lib/kubelet/pods/c1350e91-a11f-4116-a6f0-47173d84fbb9/volumes" Jan 07 04:21:42 crc kubenswrapper[4980]: I0107 04:21:42.736191 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:21:42 crc kubenswrapper[4980]: E0107 04:21:42.737713 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:21:56 crc kubenswrapper[4980]: I0107 04:21:56.736457 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:21:56 crc kubenswrapper[4980]: E0107 04:21:56.737337 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:22:09 crc kubenswrapper[4980]: I0107 04:22:09.736252 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:22:09 crc kubenswrapper[4980]: E0107 04:22:09.737358 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:22:23 crc kubenswrapper[4980]: I0107 04:22:23.747597 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:22:23 crc kubenswrapper[4980]: E0107 04:22:23.748674 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:22:36 crc kubenswrapper[4980]: I0107 04:22:36.736236 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:22:36 crc kubenswrapper[4980]: E0107 04:22:36.737269 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:22:48 crc kubenswrapper[4980]: I0107 04:22:48.736166 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:22:48 crc kubenswrapper[4980]: E0107 04:22:48.737232 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:23:00 crc kubenswrapper[4980]: I0107 04:23:00.739063 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:23:00 crc kubenswrapper[4980]: E0107 04:23:00.740034 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:23:14 crc kubenswrapper[4980]: I0107 04:23:14.735851 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:23:14 crc kubenswrapper[4980]: E0107 04:23:14.736880 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:23:26 crc kubenswrapper[4980]: I0107 04:23:26.736177 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:23:26 crc kubenswrapper[4980]: E0107 04:23:26.737348 4980 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:23:39 crc kubenswrapper[4980]: I0107 04:23:39.736112 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:23:39 crc kubenswrapper[4980]: E0107 04:23:39.737196 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:23:52 crc kubenswrapper[4980]: I0107 04:23:52.735875 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:23:52 crc kubenswrapper[4980]: E0107 04:23:52.737073 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:24:07 crc kubenswrapper[4980]: I0107 04:24:07.735772 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:24:07 crc kubenswrapper[4980]: E0107 04:24:07.736824 4980 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:24:22 crc kubenswrapper[4980]: I0107 04:24:22.736812 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:24:22 crc kubenswrapper[4980]: E0107 04:24:22.738144 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:24:37 crc kubenswrapper[4980]: I0107 04:24:37.735865 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:24:37 crc kubenswrapper[4980]: E0107 04:24:37.737098 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:24:50 crc kubenswrapper[4980]: I0107 04:24:50.736058 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:24:50 crc kubenswrapper[4980]: E0107 04:24:50.737016 4980 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:25:01 crc kubenswrapper[4980]: I0107 04:25:01.736019 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:25:01 crc kubenswrapper[4980]: E0107 04:25:01.737202 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:25:16 crc kubenswrapper[4980]: I0107 04:25:16.736184 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:25:16 crc kubenswrapper[4980]: E0107 04:25:16.737246 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:25:30 crc kubenswrapper[4980]: I0107 04:25:30.736506 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:25:30 crc kubenswrapper[4980]: E0107 
04:25:30.739457 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:25:45 crc kubenswrapper[4980]: I0107 04:25:45.735831 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:25:45 crc kubenswrapper[4980]: E0107 04:25:45.738131 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:25:59 crc kubenswrapper[4980]: I0107 04:25:59.736023 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:25:59 crc kubenswrapper[4980]: E0107 04:25:59.737070 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:26:10 crc kubenswrapper[4980]: I0107 04:26:10.736045 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:26:11 crc 
kubenswrapper[4980]: I0107 04:26:11.611357 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"a4e0d362ab2284311b58de890383de28b02d5e7e847e7d4a626a08ea7525bf0f"} Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.621825 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lr4vz"] Jan 07 04:26:40 crc kubenswrapper[4980]: E0107 04:26:40.625221 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="registry-server" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.625432 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="registry-server" Jan 07 04:26:40 crc kubenswrapper[4980]: E0107 04:26:40.625648 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="extract-utilities" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.625825 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="extract-utilities" Jan 07 04:26:40 crc kubenswrapper[4980]: E0107 04:26:40.626005 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="extract-content" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.626142 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="extract-content" Jan 07 04:26:40 crc kubenswrapper[4980]: E0107 04:26:40.626335 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="extract-content" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.626492 4980 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="extract-content" Jan 07 04:26:40 crc kubenswrapper[4980]: E0107 04:26:40.626688 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="registry-server" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.626839 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="registry-server" Jan 07 04:26:40 crc kubenswrapper[4980]: E0107 04:26:40.627018 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="extract-utilities" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.627157 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="extract-utilities" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.627689 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1350e91-a11f-4116-a6f0-47173d84fbb9" containerName="registry-server" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.628113 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b80cf65-5f11-4557-aaff-5fd609f41fbd" containerName="registry-server" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.631237 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.647736 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lr4vz"] Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.744502 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-utilities\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.744871 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-catalog-content\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.744942 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s82sj\" (UniqueName: \"kubernetes.io/projected/9b6c3e72-ded0-474d-8585-40a5963a99d5-kube-api-access-s82sj\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.846867 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-utilities\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.847071 4980 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-catalog-content\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.847106 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s82sj\" (UniqueName: \"kubernetes.io/projected/9b6c3e72-ded0-474d-8585-40a5963a99d5-kube-api-access-s82sj\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.847329 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-utilities\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.848044 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-catalog-content\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.870541 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s82sj\" (UniqueName: \"kubernetes.io/projected/9b6c3e72-ded0-474d-8585-40a5963a99d5-kube-api-access-s82sj\") pod \"redhat-marketplace-lr4vz\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:40 crc kubenswrapper[4980]: I0107 04:26:40.952850 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:41 crc kubenswrapper[4980]: I0107 04:26:41.480940 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lr4vz"] Jan 07 04:26:41 crc kubenswrapper[4980]: I0107 04:26:41.988749 4980 generic.go:334] "Generic (PLEG): container finished" podID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerID="343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475" exitCode=0 Jan 07 04:26:41 crc kubenswrapper[4980]: I0107 04:26:41.988859 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lr4vz" event={"ID":"9b6c3e72-ded0-474d-8585-40a5963a99d5","Type":"ContainerDied","Data":"343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475"} Jan 07 04:26:41 crc kubenswrapper[4980]: I0107 04:26:41.989048 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lr4vz" event={"ID":"9b6c3e72-ded0-474d-8585-40a5963a99d5","Type":"ContainerStarted","Data":"5b58aae333dacc5d257490f136d46ec5f19bf9404edbfb1bc5e25c6acc25dd57"} Jan 07 04:26:41 crc kubenswrapper[4980]: I0107 04:26:41.990810 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 04:26:43 crc kubenswrapper[4980]: I0107 04:26:43.001136 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lr4vz" event={"ID":"9b6c3e72-ded0-474d-8585-40a5963a99d5","Type":"ContainerStarted","Data":"747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe"} Jan 07 04:26:44 crc kubenswrapper[4980]: I0107 04:26:44.018655 4980 generic.go:334] "Generic (PLEG): container finished" podID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerID="747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe" exitCode=0 Jan 07 04:26:44 crc kubenswrapper[4980]: I0107 04:26:44.018761 4980 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-lr4vz" event={"ID":"9b6c3e72-ded0-474d-8585-40a5963a99d5","Type":"ContainerDied","Data":"747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe"} Jan 07 04:26:44 crc kubenswrapper[4980]: I0107 04:26:44.019273 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lr4vz" event={"ID":"9b6c3e72-ded0-474d-8585-40a5963a99d5","Type":"ContainerStarted","Data":"320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932"} Jan 07 04:26:44 crc kubenswrapper[4980]: I0107 04:26:44.043006 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lr4vz" podStartSLOduration=2.4939824010000002 podStartE2EDuration="4.042982355s" podCreationTimestamp="2026-01-07 04:26:40 +0000 UTC" firstStartedPulling="2026-01-07 04:26:41.990520357 +0000 UTC m=+3248.556215102" lastFinishedPulling="2026-01-07 04:26:43.539520321 +0000 UTC m=+3250.105215056" observedRunningTime="2026-01-07 04:26:44.038779097 +0000 UTC m=+3250.604473842" watchObservedRunningTime="2026-01-07 04:26:44.042982355 +0000 UTC m=+3250.608677090" Jan 07 04:26:50 crc kubenswrapper[4980]: I0107 04:26:50.953042 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:50 crc kubenswrapper[4980]: I0107 04:26:50.955038 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:51 crc kubenswrapper[4980]: I0107 04:26:51.031540 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:51 crc kubenswrapper[4980]: I0107 04:26:51.168005 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:51 crc kubenswrapper[4980]: I0107 
04:26:51.268344 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lr4vz"] Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.125504 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lr4vz" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="registry-server" containerID="cri-o://320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932" gracePeriod=2 Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.751913 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.797747 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-utilities\") pod \"9b6c3e72-ded0-474d-8585-40a5963a99d5\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.797798 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-catalog-content\") pod \"9b6c3e72-ded0-474d-8585-40a5963a99d5\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.797892 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s82sj\" (UniqueName: \"kubernetes.io/projected/9b6c3e72-ded0-474d-8585-40a5963a99d5-kube-api-access-s82sj\") pod \"9b6c3e72-ded0-474d-8585-40a5963a99d5\" (UID: \"9b6c3e72-ded0-474d-8585-40a5963a99d5\") " Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.800821 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-utilities" (OuterVolumeSpecName: 
"utilities") pod "9b6c3e72-ded0-474d-8585-40a5963a99d5" (UID: "9b6c3e72-ded0-474d-8585-40a5963a99d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.809117 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6c3e72-ded0-474d-8585-40a5963a99d5-kube-api-access-s82sj" (OuterVolumeSpecName: "kube-api-access-s82sj") pod "9b6c3e72-ded0-474d-8585-40a5963a99d5" (UID: "9b6c3e72-ded0-474d-8585-40a5963a99d5"). InnerVolumeSpecName "kube-api-access-s82sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.820171 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b6c3e72-ded0-474d-8585-40a5963a99d5" (UID: "9b6c3e72-ded0-474d-8585-40a5963a99d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.900498 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.900544 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b6c3e72-ded0-474d-8585-40a5963a99d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:26:53 crc kubenswrapper[4980]: I0107 04:26:53.900572 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s82sj\" (UniqueName: \"kubernetes.io/projected/9b6c3e72-ded0-474d-8585-40a5963a99d5-kube-api-access-s82sj\") on node \"crc\" DevicePath \"\"" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.143463 4980 generic.go:334] "Generic (PLEG): container finished" podID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerID="320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932" exitCode=0 Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.143526 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lr4vz" event={"ID":"9b6c3e72-ded0-474d-8585-40a5963a99d5","Type":"ContainerDied","Data":"320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932"} Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.143607 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lr4vz" event={"ID":"9b6c3e72-ded0-474d-8585-40a5963a99d5","Type":"ContainerDied","Data":"5b58aae333dacc5d257490f136d46ec5f19bf9404edbfb1bc5e25c6acc25dd57"} Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.143613 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lr4vz" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.143638 4980 scope.go:117] "RemoveContainer" containerID="320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.179941 4980 scope.go:117] "RemoveContainer" containerID="747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.214722 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lr4vz"] Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.225697 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lr4vz"] Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.235842 4980 scope.go:117] "RemoveContainer" containerID="343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.273734 4980 scope.go:117] "RemoveContainer" containerID="320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932" Jan 07 04:26:54 crc kubenswrapper[4980]: E0107 04:26:54.274169 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932\": container with ID starting with 320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932 not found: ID does not exist" containerID="320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.274213 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932"} err="failed to get container status \"320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932\": rpc error: code = NotFound desc = could not find container 
\"320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932\": container with ID starting with 320907896cfd00fa22a0d3ce8ab7732bd3da0e81b01f89ea2c50378336287932 not found: ID does not exist" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.274247 4980 scope.go:117] "RemoveContainer" containerID="747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe" Jan 07 04:26:54 crc kubenswrapper[4980]: E0107 04:26:54.275049 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe\": container with ID starting with 747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe not found: ID does not exist" containerID="747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.275098 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe"} err="failed to get container status \"747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe\": rpc error: code = NotFound desc = could not find container \"747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe\": container with ID starting with 747ae903c540cece52df456086cd57f591c3db71d661f5aafe56d171ee9c5ffe not found: ID does not exist" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.275155 4980 scope.go:117] "RemoveContainer" containerID="343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475" Jan 07 04:26:54 crc kubenswrapper[4980]: E0107 04:26:54.275810 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475\": container with ID starting with 343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475 not found: ID does not exist" 
containerID="343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475" Jan 07 04:26:54 crc kubenswrapper[4980]: I0107 04:26:54.275878 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475"} err="failed to get container status \"343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475\": rpc error: code = NotFound desc = could not find container \"343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475\": container with ID starting with 343d2bb78b53b0e7176e7ffc6b5bb9197c717708b0bd0a2471db89aeba453475 not found: ID does not exist" Jan 07 04:26:55 crc kubenswrapper[4980]: I0107 04:26:55.761831 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" path="/var/lib/kubelet/pods/9b6c3e72-ded0-474d-8585-40a5963a99d5/volumes" Jan 07 04:28:36 crc kubenswrapper[4980]: I0107 04:28:36.543338 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:28:36 crc kubenswrapper[4980]: I0107 04:28:36.544018 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.139141 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h7hhp"] Jan 07 04:28:48 crc kubenswrapper[4980]: E0107 04:28:48.142790 4980 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="extract-content" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.142833 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="extract-content" Jan 07 04:28:48 crc kubenswrapper[4980]: E0107 04:28:48.142886 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="extract-utilities" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.142903 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="extract-utilities" Jan 07 04:28:48 crc kubenswrapper[4980]: E0107 04:28:48.142964 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="registry-server" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.142978 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="registry-server" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.143394 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6c3e72-ded0-474d-8585-40a5963a99d5" containerName="registry-server" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.154506 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.160351 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h7hhp"] Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.223044 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-catalog-content\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.223677 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpgb\" (UniqueName: \"kubernetes.io/projected/7ad1f947-8786-421c-a703-261e6bf4d838-kube-api-access-9qpgb\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.223753 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-utilities\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.325615 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-catalog-content\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.325741 4980 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9qpgb\" (UniqueName: \"kubernetes.io/projected/7ad1f947-8786-421c-a703-261e6bf4d838-kube-api-access-9qpgb\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.325790 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-utilities\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.326534 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-catalog-content\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.326542 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-utilities\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.352446 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qpgb\" (UniqueName: \"kubernetes.io/projected/7ad1f947-8786-421c-a703-261e6bf4d838-kube-api-access-9qpgb\") pod \"redhat-operators-h7hhp\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:48 crc kubenswrapper[4980]: I0107 04:28:48.491670 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:49 crc kubenswrapper[4980]: I0107 04:28:49.020960 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h7hhp"] Jan 07 04:28:49 crc kubenswrapper[4980]: I0107 04:28:49.532324 4980 generic.go:334] "Generic (PLEG): container finished" podID="7ad1f947-8786-421c-a703-261e6bf4d838" containerID="81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e" exitCode=0 Jan 07 04:28:49 crc kubenswrapper[4980]: I0107 04:28:49.532393 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7hhp" event={"ID":"7ad1f947-8786-421c-a703-261e6bf4d838","Type":"ContainerDied","Data":"81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e"} Jan 07 04:28:49 crc kubenswrapper[4980]: I0107 04:28:49.532642 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7hhp" event={"ID":"7ad1f947-8786-421c-a703-261e6bf4d838","Type":"ContainerStarted","Data":"76de9348fddedcd1402c15434f88f489ab1042127b0e409f8d5dae4a7fcbd4a3"} Jan 07 04:28:50 crc kubenswrapper[4980]: I0107 04:28:50.548630 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7hhp" event={"ID":"7ad1f947-8786-421c-a703-261e6bf4d838","Type":"ContainerStarted","Data":"8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf"} Jan 07 04:28:52 crc kubenswrapper[4980]: I0107 04:28:52.574648 4980 generic.go:334] "Generic (PLEG): container finished" podID="7ad1f947-8786-421c-a703-261e6bf4d838" containerID="8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf" exitCode=0 Jan 07 04:28:52 crc kubenswrapper[4980]: I0107 04:28:52.574759 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7hhp" 
event={"ID":"7ad1f947-8786-421c-a703-261e6bf4d838","Type":"ContainerDied","Data":"8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf"} Jan 07 04:28:53 crc kubenswrapper[4980]: I0107 04:28:53.589627 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7hhp" event={"ID":"7ad1f947-8786-421c-a703-261e6bf4d838","Type":"ContainerStarted","Data":"1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816"} Jan 07 04:28:53 crc kubenswrapper[4980]: I0107 04:28:53.627615 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h7hhp" podStartSLOduration=2.106067437 podStartE2EDuration="5.627595644s" podCreationTimestamp="2026-01-07 04:28:48 +0000 UTC" firstStartedPulling="2026-01-07 04:28:49.535460233 +0000 UTC m=+3376.101154968" lastFinishedPulling="2026-01-07 04:28:53.05698843 +0000 UTC m=+3379.622683175" observedRunningTime="2026-01-07 04:28:53.617286067 +0000 UTC m=+3380.182980812" watchObservedRunningTime="2026-01-07 04:28:53.627595644 +0000 UTC m=+3380.193290389" Jan 07 04:28:58 crc kubenswrapper[4980]: I0107 04:28:58.493689 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:58 crc kubenswrapper[4980]: I0107 04:28:58.494356 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:28:59 crc kubenswrapper[4980]: I0107 04:28:59.562365 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7hhp" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="registry-server" probeResult="failure" output=< Jan 07 04:28:59 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 04:28:59 crc kubenswrapper[4980]: > Jan 07 04:29:06 crc kubenswrapper[4980]: I0107 04:29:06.543081 4980 patch_prober.go:28] interesting 
pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:29:06 crc kubenswrapper[4980]: I0107 04:29:06.544826 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:29:08 crc kubenswrapper[4980]: I0107 04:29:08.579901 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:29:08 crc kubenswrapper[4980]: I0107 04:29:08.665189 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:29:08 crc kubenswrapper[4980]: I0107 04:29:08.833006 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h7hhp"] Jan 07 04:29:09 crc kubenswrapper[4980]: I0107 04:29:09.760989 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h7hhp" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="registry-server" containerID="cri-o://1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816" gracePeriod=2 Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.327337 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.453709 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qpgb\" (UniqueName: \"kubernetes.io/projected/7ad1f947-8786-421c-a703-261e6bf4d838-kube-api-access-9qpgb\") pod \"7ad1f947-8786-421c-a703-261e6bf4d838\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.453930 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-utilities\") pod \"7ad1f947-8786-421c-a703-261e6bf4d838\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.454209 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-catalog-content\") pod \"7ad1f947-8786-421c-a703-261e6bf4d838\" (UID: \"7ad1f947-8786-421c-a703-261e6bf4d838\") " Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.454958 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-utilities" (OuterVolumeSpecName: "utilities") pod "7ad1f947-8786-421c-a703-261e6bf4d838" (UID: "7ad1f947-8786-421c-a703-261e6bf4d838"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.455121 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.464673 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad1f947-8786-421c-a703-261e6bf4d838-kube-api-access-9qpgb" (OuterVolumeSpecName: "kube-api-access-9qpgb") pod "7ad1f947-8786-421c-a703-261e6bf4d838" (UID: "7ad1f947-8786-421c-a703-261e6bf4d838"). InnerVolumeSpecName "kube-api-access-9qpgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.558796 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qpgb\" (UniqueName: \"kubernetes.io/projected/7ad1f947-8786-421c-a703-261e6bf4d838-kube-api-access-9qpgb\") on node \"crc\" DevicePath \"\"" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.615836 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ad1f947-8786-421c-a703-261e6bf4d838" (UID: "7ad1f947-8786-421c-a703-261e6bf4d838"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.661698 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad1f947-8786-421c-a703-261e6bf4d838-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.779346 4980 generic.go:334] "Generic (PLEG): container finished" podID="7ad1f947-8786-421c-a703-261e6bf4d838" containerID="1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816" exitCode=0 Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.779397 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7hhp" event={"ID":"7ad1f947-8786-421c-a703-261e6bf4d838","Type":"ContainerDied","Data":"1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816"} Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.779446 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h7hhp" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.779477 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7hhp" event={"ID":"7ad1f947-8786-421c-a703-261e6bf4d838","Type":"ContainerDied","Data":"76de9348fddedcd1402c15434f88f489ab1042127b0e409f8d5dae4a7fcbd4a3"} Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.779511 4980 scope.go:117] "RemoveContainer" containerID="1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.811957 4980 scope.go:117] "RemoveContainer" containerID="8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.840647 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h7hhp"] Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.851158 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h7hhp"] Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.862371 4980 scope.go:117] "RemoveContainer" containerID="81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.892730 4980 scope.go:117] "RemoveContainer" containerID="1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816" Jan 07 04:29:10 crc kubenswrapper[4980]: E0107 04:29:10.893279 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816\": container with ID starting with 1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816 not found: ID does not exist" containerID="1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.893317 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816"} err="failed to get container status \"1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816\": rpc error: code = NotFound desc = could not find container \"1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816\": container with ID starting with 1539d699b6b03a0ace6061af88905636a54df76ea47d2a11d732ef08b57ee816 not found: ID does not exist" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.893340 4980 scope.go:117] "RemoveContainer" containerID="8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf" Jan 07 04:29:10 crc kubenswrapper[4980]: E0107 04:29:10.893835 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf\": container with ID starting with 8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf not found: ID does not exist" containerID="8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.893874 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf"} err="failed to get container status \"8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf\": rpc error: code = NotFound desc = could not find container \"8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf\": container with ID starting with 8eeb290acf80fd2dc06e509676104900f9b4f1c3e2732eed60dc6ce394b85bdf not found: ID does not exist" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.893903 4980 scope.go:117] "RemoveContainer" containerID="81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e" Jan 07 04:29:10 crc kubenswrapper[4980]: E0107 
04:29:10.894210 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e\": container with ID starting with 81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e not found: ID does not exist" containerID="81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e" Jan 07 04:29:10 crc kubenswrapper[4980]: I0107 04:29:10.894239 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e"} err="failed to get container status \"81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e\": rpc error: code = NotFound desc = could not find container \"81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e\": container with ID starting with 81ac6b6bbc6b393ce6354823163a4f9d37739cae5c2f0b5ea1a2c4486bb1906e not found: ID does not exist" Jan 07 04:29:11 crc kubenswrapper[4980]: I0107 04:29:11.755091 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" path="/var/lib/kubelet/pods/7ad1f947-8786-421c-a703-261e6bf4d838/volumes" Jan 07 04:29:36 crc kubenswrapper[4980]: I0107 04:29:36.543871 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:29:36 crc kubenswrapper[4980]: I0107 04:29:36.544702 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 07 04:29:36 crc kubenswrapper[4980]: I0107 04:29:36.544790 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 04:29:36 crc kubenswrapper[4980]: I0107 04:29:36.545823 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4e0d362ab2284311b58de890383de28b02d5e7e847e7d4a626a08ea7525bf0f"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 04:29:36 crc kubenswrapper[4980]: I0107 04:29:36.545945 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://a4e0d362ab2284311b58de890383de28b02d5e7e847e7d4a626a08ea7525bf0f" gracePeriod=600 Jan 07 04:29:37 crc kubenswrapper[4980]: I0107 04:29:37.092945 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="a4e0d362ab2284311b58de890383de28b02d5e7e847e7d4a626a08ea7525bf0f" exitCode=0 Jan 07 04:29:37 crc kubenswrapper[4980]: I0107 04:29:37.092999 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"a4e0d362ab2284311b58de890383de28b02d5e7e847e7d4a626a08ea7525bf0f"} Jan 07 04:29:37 crc kubenswrapper[4980]: I0107 04:29:37.093326 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3"} Jan 07 04:29:37 crc 
kubenswrapper[4980]: I0107 04:29:37.093356 4980 scope.go:117] "RemoveContainer" containerID="c758901f7abad4fc3d2f29e57ef5b1b8512a2d9d4da4524594bad93751120be8" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.191048 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25"] Jan 07 04:30:00 crc kubenswrapper[4980]: E0107 04:30:00.193834 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="extract-content" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.193853 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="extract-content" Jan 07 04:30:00 crc kubenswrapper[4980]: E0107 04:30:00.193869 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="extract-utilities" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.193878 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="extract-utilities" Jan 07 04:30:00 crc kubenswrapper[4980]: E0107 04:30:00.193909 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="registry-server" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.193917 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="registry-server" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.194141 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad1f947-8786-421c-a703-261e6bf4d838" containerName="registry-server" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.194947 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.205024 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25"] Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.207327 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.207342 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.331612 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135ee0a2-13e4-4335-abc3-a5e003694fa4-config-volume\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.332276 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135ee0a2-13e4-4335-abc3-a5e003694fa4-secret-volume\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.332450 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkvs\" (UniqueName: \"kubernetes.io/projected/135ee0a2-13e4-4335-abc3-a5e003694fa4-kube-api-access-jvkvs\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.434678 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvkvs\" (UniqueName: \"kubernetes.io/projected/135ee0a2-13e4-4335-abc3-a5e003694fa4-kube-api-access-jvkvs\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.434835 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135ee0a2-13e4-4335-abc3-a5e003694fa4-config-volume\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.435004 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135ee0a2-13e4-4335-abc3-a5e003694fa4-secret-volume\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.436370 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135ee0a2-13e4-4335-abc3-a5e003694fa4-config-volume\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.441161 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/135ee0a2-13e4-4335-abc3-a5e003694fa4-secret-volume\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.468212 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvkvs\" (UniqueName: \"kubernetes.io/projected/135ee0a2-13e4-4335-abc3-a5e003694fa4-kube-api-access-jvkvs\") pod \"collect-profiles-29462670-ccc25\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:00 crc kubenswrapper[4980]: I0107 04:30:00.533355 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:01 crc kubenswrapper[4980]: I0107 04:30:01.104223 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25"] Jan 07 04:30:01 crc kubenswrapper[4980]: I0107 04:30:01.358881 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" event={"ID":"135ee0a2-13e4-4335-abc3-a5e003694fa4","Type":"ContainerStarted","Data":"6c2090486a93872b809d5ddc0edf29ad697f99471ab20cfbb91541c211b0e0e0"} Jan 07 04:30:01 crc kubenswrapper[4980]: I0107 04:30:01.359366 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" event={"ID":"135ee0a2-13e4-4335-abc3-a5e003694fa4","Type":"ContainerStarted","Data":"a674ea085136fed0fdcda3654128cade16459ef82b543989ab44d3930bac0915"} Jan 07 04:30:01 crc kubenswrapper[4980]: I0107 04:30:01.383416 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" 
podStartSLOduration=1.383393327 podStartE2EDuration="1.383393327s" podCreationTimestamp="2026-01-07 04:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 04:30:01.377989781 +0000 UTC m=+3447.943684556" watchObservedRunningTime="2026-01-07 04:30:01.383393327 +0000 UTC m=+3447.949088102" Jan 07 04:30:02 crc kubenswrapper[4980]: I0107 04:30:02.372083 4980 generic.go:334] "Generic (PLEG): container finished" podID="135ee0a2-13e4-4335-abc3-a5e003694fa4" containerID="6c2090486a93872b809d5ddc0edf29ad697f99471ab20cfbb91541c211b0e0e0" exitCode=0 Jan 07 04:30:02 crc kubenswrapper[4980]: I0107 04:30:02.372283 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" event={"ID":"135ee0a2-13e4-4335-abc3-a5e003694fa4","Type":"ContainerDied","Data":"6c2090486a93872b809d5ddc0edf29ad697f99471ab20cfbb91541c211b0e0e0"} Jan 07 04:30:03 crc kubenswrapper[4980]: I0107 04:30:03.889485 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.016732 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135ee0a2-13e4-4335-abc3-a5e003694fa4-config-volume\") pod \"135ee0a2-13e4-4335-abc3-a5e003694fa4\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.017761 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135ee0a2-13e4-4335-abc3-a5e003694fa4-config-volume" (OuterVolumeSpecName: "config-volume") pod "135ee0a2-13e4-4335-abc3-a5e003694fa4" (UID: "135ee0a2-13e4-4335-abc3-a5e003694fa4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.017917 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135ee0a2-13e4-4335-abc3-a5e003694fa4-secret-volume\") pod \"135ee0a2-13e4-4335-abc3-a5e003694fa4\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.018016 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvkvs\" (UniqueName: \"kubernetes.io/projected/135ee0a2-13e4-4335-abc3-a5e003694fa4-kube-api-access-jvkvs\") pod \"135ee0a2-13e4-4335-abc3-a5e003694fa4\" (UID: \"135ee0a2-13e4-4335-abc3-a5e003694fa4\") " Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.018650 4980 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135ee0a2-13e4-4335-abc3-a5e003694fa4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.025683 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135ee0a2-13e4-4335-abc3-a5e003694fa4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "135ee0a2-13e4-4335-abc3-a5e003694fa4" (UID: "135ee0a2-13e4-4335-abc3-a5e003694fa4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.026069 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135ee0a2-13e4-4335-abc3-a5e003694fa4-kube-api-access-jvkvs" (OuterVolumeSpecName: "kube-api-access-jvkvs") pod "135ee0a2-13e4-4335-abc3-a5e003694fa4" (UID: "135ee0a2-13e4-4335-abc3-a5e003694fa4"). InnerVolumeSpecName "kube-api-access-jvkvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.119932 4980 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135ee0a2-13e4-4335-abc3-a5e003694fa4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.119991 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvkvs\" (UniqueName: \"kubernetes.io/projected/135ee0a2-13e4-4335-abc3-a5e003694fa4-kube-api-access-jvkvs\") on node \"crc\" DevicePath \"\"" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.410213 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" event={"ID":"135ee0a2-13e4-4335-abc3-a5e003694fa4","Type":"ContainerDied","Data":"a674ea085136fed0fdcda3654128cade16459ef82b543989ab44d3930bac0915"} Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.410282 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a674ea085136fed0fdcda3654128cade16459ef82b543989ab44d3930bac0915" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.410466 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462670-ccc25" Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.496096 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw"] Jan 07 04:30:04 crc kubenswrapper[4980]: I0107 04:30:04.511140 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462625-4xdzw"] Jan 07 04:30:05 crc kubenswrapper[4980]: I0107 04:30:05.754746 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881208a4-6e38-4c65-8767-6c4d096c565a" path="/var/lib/kubelet/pods/881208a4-6e38-4c65-8767-6c4d096c565a/volumes" Jan 07 04:30:42 crc kubenswrapper[4980]: I0107 04:30:42.626042 4980 scope.go:117] "RemoveContainer" containerID="06dff5a19673f7a6b3acfd8eeb119cbc41484384f8509c33ac8625d40b26251d" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.379213 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6xjcs"] Jan 07 04:31:24 crc kubenswrapper[4980]: E0107 04:31:24.380944 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135ee0a2-13e4-4335-abc3-a5e003694fa4" containerName="collect-profiles" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.380973 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="135ee0a2-13e4-4335-abc3-a5e003694fa4" containerName="collect-profiles" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.381364 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="135ee0a2-13e4-4335-abc3-a5e003694fa4" containerName="collect-profiles" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.384188 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.390461 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xjcs"] Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.491678 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-utilities\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.491807 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slr6v\" (UniqueName: \"kubernetes.io/projected/1a98e63c-f031-4860-a2f0-8b584cfe8978-kube-api-access-slr6v\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.491929 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-catalog-content\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.594180 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-catalog-content\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.594257 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-utilities\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.594351 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slr6v\" (UniqueName: \"kubernetes.io/projected/1a98e63c-f031-4860-a2f0-8b584cfe8978-kube-api-access-slr6v\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.594817 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-catalog-content\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.594973 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-utilities\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.619061 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slr6v\" (UniqueName: \"kubernetes.io/projected/1a98e63c-f031-4860-a2f0-8b584cfe8978-kube-api-access-slr6v\") pod \"community-operators-6xjcs\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.757951 4980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-22j9p"] Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.759679 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.760635 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.773594 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22j9p"] Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.900678 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5kr8\" (UniqueName: \"kubernetes.io/projected/c82a8476-92c3-447b-98df-9af4f3eb968a-kube-api-access-p5kr8\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.900725 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-catalog-content\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:24 crc kubenswrapper[4980]: I0107 04:31:24.900748 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-utilities\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.004603 4980 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-p5kr8\" (UniqueName: \"kubernetes.io/projected/c82a8476-92c3-447b-98df-9af4f3eb968a-kube-api-access-p5kr8\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.004651 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-catalog-content\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.004674 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-utilities\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.005205 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-utilities\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.005265 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-catalog-content\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.023745 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5kr8\" (UniqueName: 
\"kubernetes.io/projected/c82a8476-92c3-447b-98df-9af4f3eb968a-kube-api-access-p5kr8\") pod \"certified-operators-22j9p\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.091171 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.244343 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xjcs"] Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.452420 4980 generic.go:334] "Generic (PLEG): container finished" podID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerID="643bbecf5ad9185e237929fcd5c435a956826623594a4b7f80582b69ad205a22" exitCode=0 Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.452622 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xjcs" event={"ID":"1a98e63c-f031-4860-a2f0-8b584cfe8978","Type":"ContainerDied","Data":"643bbecf5ad9185e237929fcd5c435a956826623594a4b7f80582b69ad205a22"} Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.452722 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xjcs" event={"ID":"1a98e63c-f031-4860-a2f0-8b584cfe8978","Type":"ContainerStarted","Data":"80c2ff05fd1cb3395bd3eddc31b7648d046e70e0d201609d4c07d0c6238c990a"} Jan 07 04:31:25 crc kubenswrapper[4980]: I0107 04:31:25.553888 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22j9p"] Jan 07 04:31:26 crc kubenswrapper[4980]: I0107 04:31:26.465320 4980 generic.go:334] "Generic (PLEG): container finished" podID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerID="93e7566efbf8460e45a4bd67a6f80546fd53ec782ba790a8f16623347aabf2ca" exitCode=0 Jan 07 04:31:26 crc kubenswrapper[4980]: I0107 04:31:26.465500 4980 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22j9p" event={"ID":"c82a8476-92c3-447b-98df-9af4f3eb968a","Type":"ContainerDied","Data":"93e7566efbf8460e45a4bd67a6f80546fd53ec782ba790a8f16623347aabf2ca"} Jan 07 04:31:26 crc kubenswrapper[4980]: I0107 04:31:26.465663 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22j9p" event={"ID":"c82a8476-92c3-447b-98df-9af4f3eb968a","Type":"ContainerStarted","Data":"417e1e5e44686040468448de79f1942907f58956ba1f801d41412e8bfbf440be"} Jan 07 04:31:27 crc kubenswrapper[4980]: I0107 04:31:27.476156 4980 generic.go:334] "Generic (PLEG): container finished" podID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerID="03f880c070c70bb8cbde56692690c96d43b5048dd7def80417997f9ce854da00" exitCode=0 Jan 07 04:31:27 crc kubenswrapper[4980]: I0107 04:31:27.476237 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xjcs" event={"ID":"1a98e63c-f031-4860-a2f0-8b584cfe8978","Type":"ContainerDied","Data":"03f880c070c70bb8cbde56692690c96d43b5048dd7def80417997f9ce854da00"} Jan 07 04:31:27 crc kubenswrapper[4980]: I0107 04:31:27.479330 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22j9p" event={"ID":"c82a8476-92c3-447b-98df-9af4f3eb968a","Type":"ContainerStarted","Data":"b243e23ad561c7d370e17c302d57625f3fd52aa95e59b70ef2a62e8f74b61ca0"} Jan 07 04:31:28 crc kubenswrapper[4980]: I0107 04:31:28.495348 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xjcs" event={"ID":"1a98e63c-f031-4860-a2f0-8b584cfe8978","Type":"ContainerStarted","Data":"cef2c875bdfd4517b95db539e98bcbd7ea4700be52105d5ac691021f976c9a4d"} Jan 07 04:31:28 crc kubenswrapper[4980]: I0107 04:31:28.499323 4980 generic.go:334] "Generic (PLEG): container finished" podID="c82a8476-92c3-447b-98df-9af4f3eb968a" 
containerID="b243e23ad561c7d370e17c302d57625f3fd52aa95e59b70ef2a62e8f74b61ca0" exitCode=0 Jan 07 04:31:28 crc kubenswrapper[4980]: I0107 04:31:28.499374 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22j9p" event={"ID":"c82a8476-92c3-447b-98df-9af4f3eb968a","Type":"ContainerDied","Data":"b243e23ad561c7d370e17c302d57625f3fd52aa95e59b70ef2a62e8f74b61ca0"} Jan 07 04:31:28 crc kubenswrapper[4980]: I0107 04:31:28.554247 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6xjcs" podStartSLOduration=2.104578092 podStartE2EDuration="4.554219015s" podCreationTimestamp="2026-01-07 04:31:24 +0000 UTC" firstStartedPulling="2026-01-07 04:31:25.455052283 +0000 UTC m=+3532.020747018" lastFinishedPulling="2026-01-07 04:31:27.904693196 +0000 UTC m=+3534.470387941" observedRunningTime="2026-01-07 04:31:28.528267937 +0000 UTC m=+3535.093962672" watchObservedRunningTime="2026-01-07 04:31:28.554219015 +0000 UTC m=+3535.119913800" Jan 07 04:31:29 crc kubenswrapper[4980]: I0107 04:31:29.512397 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22j9p" event={"ID":"c82a8476-92c3-447b-98df-9af4f3eb968a","Type":"ContainerStarted","Data":"82e16e5ce4d47862bde7e6edf94a5da07e0d79256c2a25c32bdbcd65a9709915"} Jan 07 04:31:29 crc kubenswrapper[4980]: I0107 04:31:29.542274 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-22j9p" podStartSLOduration=3.018344835 podStartE2EDuration="5.542256092s" podCreationTimestamp="2026-01-07 04:31:24 +0000 UTC" firstStartedPulling="2026-01-07 04:31:26.467381536 +0000 UTC m=+3533.033076311" lastFinishedPulling="2026-01-07 04:31:28.991292823 +0000 UTC m=+3535.556987568" observedRunningTime="2026-01-07 04:31:29.534149573 +0000 UTC m=+3536.099844318" watchObservedRunningTime="2026-01-07 04:31:29.542256092 +0000 UTC 
m=+3536.107950837" Jan 07 04:31:34 crc kubenswrapper[4980]: I0107 04:31:34.760679 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:34 crc kubenswrapper[4980]: I0107 04:31:34.761327 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:34 crc kubenswrapper[4980]: I0107 04:31:34.860321 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:35 crc kubenswrapper[4980]: I0107 04:31:35.092468 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:35 crc kubenswrapper[4980]: I0107 04:31:35.092526 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:35 crc kubenswrapper[4980]: I0107 04:31:35.152829 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:35 crc kubenswrapper[4980]: I0107 04:31:35.669884 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:35 crc kubenswrapper[4980]: I0107 04:31:35.682234 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:36 crc kubenswrapper[4980]: I0107 04:31:36.543359 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:31:36 crc kubenswrapper[4980]: I0107 04:31:36.544044 4980 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:31:38 crc kubenswrapper[4980]: I0107 04:31:38.557541 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xjcs"] Jan 07 04:31:38 crc kubenswrapper[4980]: I0107 04:31:38.558237 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6xjcs" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="registry-server" containerID="cri-o://cef2c875bdfd4517b95db539e98bcbd7ea4700be52105d5ac691021f976c9a4d" gracePeriod=2 Jan 07 04:31:38 crc kubenswrapper[4980]: I0107 04:31:38.951473 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22j9p"] Jan 07 04:31:38 crc kubenswrapper[4980]: I0107 04:31:38.951990 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-22j9p" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="registry-server" containerID="cri-o://82e16e5ce4d47862bde7e6edf94a5da07e0d79256c2a25c32bdbcd65a9709915" gracePeriod=2 Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.009137 4980 generic.go:334] "Generic (PLEG): container finished" podID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerID="82e16e5ce4d47862bde7e6edf94a5da07e0d79256c2a25c32bdbcd65a9709915" exitCode=0 Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.016508 4980 generic.go:334] "Generic (PLEG): container finished" podID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerID="cef2c875bdfd4517b95db539e98bcbd7ea4700be52105d5ac691021f976c9a4d" exitCode=0 Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.055626 4980 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-22j9p" event={"ID":"c82a8476-92c3-447b-98df-9af4f3eb968a","Type":"ContainerDied","Data":"82e16e5ce4d47862bde7e6edf94a5da07e0d79256c2a25c32bdbcd65a9709915"} Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.055696 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xjcs" event={"ID":"1a98e63c-f031-4860-a2f0-8b584cfe8978","Type":"ContainerDied","Data":"cef2c875bdfd4517b95db539e98bcbd7ea4700be52105d5ac691021f976c9a4d"} Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.565071 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.570788 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.581465 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-catalog-content\") pod \"c82a8476-92c3-447b-98df-9af4f3eb968a\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.581649 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slr6v\" (UniqueName: \"kubernetes.io/projected/1a98e63c-f031-4860-a2f0-8b584cfe8978-kube-api-access-slr6v\") pod \"1a98e63c-f031-4860-a2f0-8b584cfe8978\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.589832 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a98e63c-f031-4860-a2f0-8b584cfe8978-kube-api-access-slr6v" (OuterVolumeSpecName: "kube-api-access-slr6v") pod "1a98e63c-f031-4860-a2f0-8b584cfe8978" (UID: 
"1a98e63c-f031-4860-a2f0-8b584cfe8978"). InnerVolumeSpecName "kube-api-access-slr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.597724 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5kr8\" (UniqueName: \"kubernetes.io/projected/c82a8476-92c3-447b-98df-9af4f3eb968a-kube-api-access-p5kr8\") pod \"c82a8476-92c3-447b-98df-9af4f3eb968a\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.597909 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-catalog-content\") pod \"1a98e63c-f031-4860-a2f0-8b584cfe8978\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.598105 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-utilities\") pod \"c82a8476-92c3-447b-98df-9af4f3eb968a\" (UID: \"c82a8476-92c3-447b-98df-9af4f3eb968a\") " Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.598158 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-utilities\") pod \"1a98e63c-f031-4860-a2f0-8b584cfe8978\" (UID: \"1a98e63c-f031-4860-a2f0-8b584cfe8978\") " Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.599598 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slr6v\" (UniqueName: \"kubernetes.io/projected/1a98e63c-f031-4860-a2f0-8b584cfe8978-kube-api-access-slr6v\") on node \"crc\" DevicePath \"\"" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.602030 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-utilities" (OuterVolumeSpecName: "utilities") pod "c82a8476-92c3-447b-98df-9af4f3eb968a" (UID: "c82a8476-92c3-447b-98df-9af4f3eb968a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.609417 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-utilities" (OuterVolumeSpecName: "utilities") pod "1a98e63c-f031-4860-a2f0-8b584cfe8978" (UID: "1a98e63c-f031-4860-a2f0-8b584cfe8978"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.633522 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82a8476-92c3-447b-98df-9af4f3eb968a-kube-api-access-p5kr8" (OuterVolumeSpecName: "kube-api-access-p5kr8") pod "c82a8476-92c3-447b-98df-9af4f3eb968a" (UID: "c82a8476-92c3-447b-98df-9af4f3eb968a"). InnerVolumeSpecName "kube-api-access-p5kr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.668649 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a98e63c-f031-4860-a2f0-8b584cfe8978" (UID: "1a98e63c-f031-4860-a2f0-8b584cfe8978"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.701310 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5kr8\" (UniqueName: \"kubernetes.io/projected/c82a8476-92c3-447b-98df-9af4f3eb968a-kube-api-access-p5kr8\") on node \"crc\" DevicePath \"\"" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.701347 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.701358 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.701367 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a98e63c-f031-4860-a2f0-8b584cfe8978-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.725541 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c82a8476-92c3-447b-98df-9af4f3eb968a" (UID: "c82a8476-92c3-447b-98df-9af4f3eb968a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:31:40 crc kubenswrapper[4980]: I0107 04:31:40.803503 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82a8476-92c3-447b-98df-9af4f3eb968a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.035669 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22j9p" event={"ID":"c82a8476-92c3-447b-98df-9af4f3eb968a","Type":"ContainerDied","Data":"417e1e5e44686040468448de79f1942907f58956ba1f801d41412e8bfbf440be"} Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.035736 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22j9p" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.035766 4980 scope.go:117] "RemoveContainer" containerID="82e16e5ce4d47862bde7e6edf94a5da07e0d79256c2a25c32bdbcd65a9709915" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.040023 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xjcs" event={"ID":"1a98e63c-f031-4860-a2f0-8b584cfe8978","Type":"ContainerDied","Data":"80c2ff05fd1cb3395bd3eddc31b7648d046e70e0d201609d4c07d0c6238c990a"} Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.040120 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xjcs" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.065433 4980 scope.go:117] "RemoveContainer" containerID="b243e23ad561c7d370e17c302d57625f3fd52aa95e59b70ef2a62e8f74b61ca0" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.120688 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xjcs"] Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.132296 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6xjcs"] Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.136274 4980 scope.go:117] "RemoveContainer" containerID="93e7566efbf8460e45a4bd67a6f80546fd53ec782ba790a8f16623347aabf2ca" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.144119 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22j9p"] Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.153680 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-22j9p"] Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.165672 4980 scope.go:117] "RemoveContainer" containerID="cef2c875bdfd4517b95db539e98bcbd7ea4700be52105d5ac691021f976c9a4d" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.208465 4980 scope.go:117] "RemoveContainer" containerID="03f880c070c70bb8cbde56692690c96d43b5048dd7def80417997f9ce854da00" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.237751 4980 scope.go:117] "RemoveContainer" containerID="643bbecf5ad9185e237929fcd5c435a956826623594a4b7f80582b69ad205a22" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.755627 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" path="/var/lib/kubelet/pods/1a98e63c-f031-4860-a2f0-8b584cfe8978/volumes" Jan 07 04:31:41 crc kubenswrapper[4980]: I0107 04:31:41.757181 4980 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" path="/var/lib/kubelet/pods/c82a8476-92c3-447b-98df-9af4f3eb968a/volumes" Jan 07 04:32:06 crc kubenswrapper[4980]: I0107 04:32:06.543514 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:32:06 crc kubenswrapper[4980]: I0107 04:32:06.544243 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:32:07 crc kubenswrapper[4980]: I0107 04:32:07.337779 4980 generic.go:334] "Generic (PLEG): container finished" podID="4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" containerID="0a1afb61d45c489e5a5771fbd1cdb94e8833af0092c7953f0dce0ae3ae5d925f" exitCode=0 Jan 07 04:32:07 crc kubenswrapper[4980]: I0107 04:32:07.338175 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee","Type":"ContainerDied","Data":"0a1afb61d45c489e5a5771fbd1cdb94e8833af0092c7953f0dce0ae3ae5d925f"} Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.790687 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.918896 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ca-certs\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.918974 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-config-data\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.919001 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.919047 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptl6f\" (UniqueName: \"kubernetes.io/projected/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-kube-api-access-ptl6f\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.919071 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.919108 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-workdir\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.919147 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-temporary\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.919242 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config-secret\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.919265 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ssh-key\") pod \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\" (UID: \"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee\") " Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.920673 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.920867 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-config-data" (OuterVolumeSpecName: "config-data") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.925460 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.928958 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-kube-api-access-ptl6f" (OuterVolumeSpecName: "kube-api-access-ptl6f") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "kube-api-access-ptl6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.934802 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.956525 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.963738 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:32:08 crc kubenswrapper[4980]: I0107 04:32:08.966630 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.002900 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" (UID: "4d0a99f6-8fa8-40a7-b994-16a2e287c6ee"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.022123 4980 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.022413 4980 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.022714 4980 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.022907 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptl6f\" (UniqueName: \"kubernetes.io/projected/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-kube-api-access-ptl6f\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.023078 4980 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.023211 4980 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.023335 4980 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 07 
04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.023524 4980 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.023691 4980 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d0a99f6-8fa8-40a7-b994-16a2e287c6ee-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.064631 4980 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.125739 4980 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.370310 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4d0a99f6-8fa8-40a7-b994-16a2e287c6ee","Type":"ContainerDied","Data":"e692e9b993665ac4192823d9769acf1227339a4d431e121330bccf9830743307"} Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.370369 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e692e9b993665ac4192823d9769acf1227339a4d431e121330bccf9830743307" Jan 07 04:32:09 crc kubenswrapper[4980]: I0107 04:32:09.370900 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.288864 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 07 04:32:11 crc kubenswrapper[4980]: E0107 04:32:11.291208 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="extract-content" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.291397 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="extract-content" Jan 07 04:32:11 crc kubenswrapper[4980]: E0107 04:32:11.291603 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" containerName="tempest-tests-tempest-tests-runner" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.291838 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" containerName="tempest-tests-tempest-tests-runner" Jan 07 04:32:11 crc kubenswrapper[4980]: E0107 04:32:11.292038 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="extract-utilities" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.292206 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="extract-utilities" Jan 07 04:32:11 crc kubenswrapper[4980]: E0107 04:32:11.292348 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="registry-server" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.292489 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="registry-server" Jan 07 04:32:11 crc kubenswrapper[4980]: E0107 04:32:11.292643 4980 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="extract-utilities" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.292757 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="extract-utilities" Jan 07 04:32:11 crc kubenswrapper[4980]: E0107 04:32:11.292963 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="extract-content" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.293120 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="extract-content" Jan 07 04:32:11 crc kubenswrapper[4980]: E0107 04:32:11.293502 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="registry-server" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.293814 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="registry-server" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.295052 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0a99f6-8fa8-40a7-b994-16a2e287c6ee" containerName="tempest-tests-tempest-tests-runner" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.295249 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a98e63c-f031-4860-a2f0-8b584cfe8978" containerName="registry-server" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.295382 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82a8476-92c3-447b-98df-9af4f3eb968a" containerName="registry-server" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.296706 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.301653 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gpw7m" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.303689 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.483714 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d8eaae2d-e134-40ee-b33a-51a04571798a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.483805 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhws\" (UniqueName: \"kubernetes.io/projected/d8eaae2d-e134-40ee-b33a-51a04571798a-kube-api-access-thhws\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d8eaae2d-e134-40ee-b33a-51a04571798a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.586636 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d8eaae2d-e134-40ee-b33a-51a04571798a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.586760 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thhws\" (UniqueName: 
\"kubernetes.io/projected/d8eaae2d-e134-40ee-b33a-51a04571798a-kube-api-access-thhws\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d8eaae2d-e134-40ee-b33a-51a04571798a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.587339 4980 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d8eaae2d-e134-40ee-b33a-51a04571798a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.626151 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thhws\" (UniqueName: \"kubernetes.io/projected/d8eaae2d-e134-40ee-b33a-51a04571798a-kube-api-access-thhws\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d8eaae2d-e134-40ee-b33a-51a04571798a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.628303 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d8eaae2d-e134-40ee-b33a-51a04571798a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:11 crc kubenswrapper[4980]: I0107 04:32:11.931241 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 07 04:32:12 crc kubenswrapper[4980]: I0107 04:32:12.456299 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 07 04:32:12 crc kubenswrapper[4980]: I0107 04:32:12.459955 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 04:32:13 crc kubenswrapper[4980]: I0107 04:32:13.428752 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d8eaae2d-e134-40ee-b33a-51a04571798a","Type":"ContainerStarted","Data":"fed852d769dc83c9c5aaeb22541f85d51d0906cc99e9fbfa44450cd8b4c2e0cc"} Jan 07 04:32:14 crc kubenswrapper[4980]: I0107 04:32:14.446146 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d8eaae2d-e134-40ee-b33a-51a04571798a","Type":"ContainerStarted","Data":"6d5c775f656bc98655b4237ded392bd9c1fbae42530b673acd7d7ae385390d34"} Jan 07 04:32:14 crc kubenswrapper[4980]: I0107 04:32:14.478332 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.142826606 podStartE2EDuration="3.47830588s" podCreationTimestamp="2026-01-07 04:32:11 +0000 UTC" firstStartedPulling="2026-01-07 04:32:12.459409397 +0000 UTC m=+3579.025104172" lastFinishedPulling="2026-01-07 04:32:13.794888711 +0000 UTC m=+3580.360583446" observedRunningTime="2026-01-07 04:32:14.469088456 +0000 UTC m=+3581.034783221" watchObservedRunningTime="2026-01-07 04:32:14.47830588 +0000 UTC m=+3581.044000625" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.542898 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.543710 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.543781 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.545109 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.545236 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" gracePeriod=600 Jan 07 04:32:36 crc kubenswrapper[4980]: E0107 04:32:36.680627 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.771011 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" exitCode=0 Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.771063 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3"} Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.771103 4980 scope.go:117] "RemoveContainer" containerID="a4e0d362ab2284311b58de890383de28b02d5e7e847e7d4a626a08ea7525bf0f" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.772089 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:32:36 crc kubenswrapper[4980]: E0107 04:32:36.772609 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.934567 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ql5tl/must-gather-786sv"] Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.936785 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.939029 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ql5tl"/"default-dockercfg-qn6tw" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.939316 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ql5tl"/"kube-root-ca.crt" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.942924 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ql5tl"/"openshift-service-ca.crt" Jan 07 04:32:36 crc kubenswrapper[4980]: I0107 04:32:36.944180 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ql5tl/must-gather-786sv"] Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.086342 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9627588f-8915-4b95-b0df-ef9e7abb9009-must-gather-output\") pod \"must-gather-786sv\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.086525 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4z2x\" (UniqueName: \"kubernetes.io/projected/9627588f-8915-4b95-b0df-ef9e7abb9009-kube-api-access-n4z2x\") pod \"must-gather-786sv\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.188115 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9627588f-8915-4b95-b0df-ef9e7abb9009-must-gather-output\") pod \"must-gather-786sv\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " 
pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.188179 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4z2x\" (UniqueName: \"kubernetes.io/projected/9627588f-8915-4b95-b0df-ef9e7abb9009-kube-api-access-n4z2x\") pod \"must-gather-786sv\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.188699 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9627588f-8915-4b95-b0df-ef9e7abb9009-must-gather-output\") pod \"must-gather-786sv\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.220606 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4z2x\" (UniqueName: \"kubernetes.io/projected/9627588f-8915-4b95-b0df-ef9e7abb9009-kube-api-access-n4z2x\") pod \"must-gather-786sv\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.258170 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.698698 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ql5tl/must-gather-786sv"] Jan 07 04:32:37 crc kubenswrapper[4980]: I0107 04:32:37.796696 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/must-gather-786sv" event={"ID":"9627588f-8915-4b95-b0df-ef9e7abb9009","Type":"ContainerStarted","Data":"1a813aa5a31b69046fce9f26ba15b0050c9e10d4c1db9df460a0b70b562e72ba"} Jan 07 04:32:44 crc kubenswrapper[4980]: I0107 04:32:44.880890 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/must-gather-786sv" event={"ID":"9627588f-8915-4b95-b0df-ef9e7abb9009","Type":"ContainerStarted","Data":"491354541b664ff9f4a1308d7a7840cab7e9f815366aa67b7d7b9628dae8fd26"} Jan 07 04:32:44 crc kubenswrapper[4980]: I0107 04:32:44.881545 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/must-gather-786sv" event={"ID":"9627588f-8915-4b95-b0df-ef9e7abb9009","Type":"ContainerStarted","Data":"3e9a9f3ce8302a04b80596ce13f668a8b7e670fdb9e9ae871bc285fcd3f4fdee"} Jan 07 04:32:44 crc kubenswrapper[4980]: I0107 04:32:44.911080 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ql5tl/must-gather-786sv" podStartSLOduration=2.868284878 podStartE2EDuration="8.911045057s" podCreationTimestamp="2026-01-07 04:32:36 +0000 UTC" firstStartedPulling="2026-01-07 04:32:37.705275528 +0000 UTC m=+3604.270970263" lastFinishedPulling="2026-01-07 04:32:43.748035687 +0000 UTC m=+3610.313730442" observedRunningTime="2026-01-07 04:32:44.903904857 +0000 UTC m=+3611.469599592" watchObservedRunningTime="2026-01-07 04:32:44.911045057 +0000 UTC m=+3611.476739842" Jan 07 04:32:47 crc kubenswrapper[4980]: I0107 04:32:47.736673 4980 scope.go:117] "RemoveContainer" 
containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:32:47 crc kubenswrapper[4980]: E0107 04:32:47.746387 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:32:47 crc kubenswrapper[4980]: I0107 04:32:47.780292 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-72nt7"] Jan 07 04:32:47 crc kubenswrapper[4980]: I0107 04:32:47.781610 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:47 crc kubenswrapper[4980]: I0107 04:32:47.932967 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79qdz\" (UniqueName: \"kubernetes.io/projected/b9b33184-2040-4c9e-886b-818eb33c6408-kube-api-access-79qdz\") pod \"crc-debug-72nt7\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:47 crc kubenswrapper[4980]: I0107 04:32:47.933370 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b33184-2040-4c9e-886b-818eb33c6408-host\") pod \"crc-debug-72nt7\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:48 crc kubenswrapper[4980]: I0107 04:32:48.034525 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79qdz\" (UniqueName: \"kubernetes.io/projected/b9b33184-2040-4c9e-886b-818eb33c6408-kube-api-access-79qdz\") pod 
\"crc-debug-72nt7\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:48 crc kubenswrapper[4980]: I0107 04:32:48.034683 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b33184-2040-4c9e-886b-818eb33c6408-host\") pod \"crc-debug-72nt7\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:48 crc kubenswrapper[4980]: I0107 04:32:48.034785 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b33184-2040-4c9e-886b-818eb33c6408-host\") pod \"crc-debug-72nt7\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:48 crc kubenswrapper[4980]: I0107 04:32:48.054463 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79qdz\" (UniqueName: \"kubernetes.io/projected/b9b33184-2040-4c9e-886b-818eb33c6408-kube-api-access-79qdz\") pod \"crc-debug-72nt7\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:48 crc kubenswrapper[4980]: I0107 04:32:48.167966 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:32:48 crc kubenswrapper[4980]: I0107 04:32:48.926904 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" event={"ID":"b9b33184-2040-4c9e-886b-818eb33c6408","Type":"ContainerStarted","Data":"96e3beec0346d01ebba2b1b5f82e7c2a4cc59c1137fd5a0a521d9889f959a339"} Jan 07 04:32:58 crc kubenswrapper[4980]: I0107 04:32:58.527732 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-24j8g" podUID="3f131e38-245d-400d-8a7b-f9c7dc486db8" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 07 04:33:00 crc kubenswrapper[4980]: I0107 04:33:00.736440 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:33:00 crc kubenswrapper[4980]: E0107 04:33:00.737364 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:33:03 crc kubenswrapper[4980]: I0107 04:33:03.098892 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" event={"ID":"b9b33184-2040-4c9e-886b-818eb33c6408","Type":"ContainerStarted","Data":"f4fa38a92024239ce715f705f3d3a5a490f2ed7a8246469322dd4b55f2078ad4"} Jan 07 04:33:03 crc kubenswrapper[4980]: I0107 04:33:03.118784 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" podStartSLOduration=2.321259735 
podStartE2EDuration="16.118763886s" podCreationTimestamp="2026-01-07 04:32:47 +0000 UTC" firstStartedPulling="2026-01-07 04:32:48.199251567 +0000 UTC m=+3614.764946302" lastFinishedPulling="2026-01-07 04:33:01.996755718 +0000 UTC m=+3628.562450453" observedRunningTime="2026-01-07 04:33:03.113310478 +0000 UTC m=+3629.679005213" watchObservedRunningTime="2026-01-07 04:33:03.118763886 +0000 UTC m=+3629.684458621" Jan 07 04:33:14 crc kubenswrapper[4980]: I0107 04:33:14.736041 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:33:14 crc kubenswrapper[4980]: E0107 04:33:14.736737 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:33:27 crc kubenswrapper[4980]: I0107 04:33:27.736784 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:33:27 crc kubenswrapper[4980]: E0107 04:33:27.737596 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:33:40 crc kubenswrapper[4980]: I0107 04:33:40.736373 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:33:40 crc kubenswrapper[4980]: E0107 04:33:40.737051 
4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:33:41 crc kubenswrapper[4980]: I0107 04:33:41.454675 4980 generic.go:334] "Generic (PLEG): container finished" podID="b9b33184-2040-4c9e-886b-818eb33c6408" containerID="f4fa38a92024239ce715f705f3d3a5a490f2ed7a8246469322dd4b55f2078ad4" exitCode=0 Jan 07 04:33:41 crc kubenswrapper[4980]: I0107 04:33:41.454743 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" event={"ID":"b9b33184-2040-4c9e-886b-818eb33c6408","Type":"ContainerDied","Data":"f4fa38a92024239ce715f705f3d3a5a490f2ed7a8246469322dd4b55f2078ad4"} Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.587526 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.628117 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-72nt7"] Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.640389 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-72nt7"] Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.687703 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b33184-2040-4c9e-886b-818eb33c6408-host\") pod \"b9b33184-2040-4c9e-886b-818eb33c6408\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.687803 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b33184-2040-4c9e-886b-818eb33c6408-host" (OuterVolumeSpecName: "host") pod "b9b33184-2040-4c9e-886b-818eb33c6408" (UID: "b9b33184-2040-4c9e-886b-818eb33c6408"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.687859 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79qdz\" (UniqueName: \"kubernetes.io/projected/b9b33184-2040-4c9e-886b-818eb33c6408-kube-api-access-79qdz\") pod \"b9b33184-2040-4c9e-886b-818eb33c6408\" (UID: \"b9b33184-2040-4c9e-886b-818eb33c6408\") " Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.688243 4980 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b33184-2040-4c9e-886b-818eb33c6408-host\") on node \"crc\" DevicePath \"\"" Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.694813 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b33184-2040-4c9e-886b-818eb33c6408-kube-api-access-79qdz" (OuterVolumeSpecName: "kube-api-access-79qdz") pod "b9b33184-2040-4c9e-886b-818eb33c6408" (UID: "b9b33184-2040-4c9e-886b-818eb33c6408"). InnerVolumeSpecName "kube-api-access-79qdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:33:42 crc kubenswrapper[4980]: I0107 04:33:42.792352 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79qdz\" (UniqueName: \"kubernetes.io/projected/b9b33184-2040-4c9e-886b-818eb33c6408-kube-api-access-79qdz\") on node \"crc\" DevicePath \"\"" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.479749 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96e3beec0346d01ebba2b1b5f82e7c2a4cc59c1137fd5a0a521d9889f959a339" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.479813 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-72nt7" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.758702 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b33184-2040-4c9e-886b-818eb33c6408" path="/var/lib/kubelet/pods/b9b33184-2040-4c9e-886b-818eb33c6408/volumes" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.827473 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-f74rr"] Jan 07 04:33:43 crc kubenswrapper[4980]: E0107 04:33:43.827964 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b33184-2040-4c9e-886b-818eb33c6408" containerName="container-00" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.827986 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b33184-2040-4c9e-886b-818eb33c6408" containerName="container-00" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.828236 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b33184-2040-4c9e-886b-818eb33c6408" containerName="container-00" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.828953 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.915963 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdp8\" (UniqueName: \"kubernetes.io/projected/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-kube-api-access-ccdp8\") pod \"crc-debug-f74rr\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:43 crc kubenswrapper[4980]: I0107 04:33:43.916240 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-host\") pod \"crc-debug-f74rr\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:44 crc kubenswrapper[4980]: I0107 04:33:44.018123 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdp8\" (UniqueName: \"kubernetes.io/projected/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-kube-api-access-ccdp8\") pod \"crc-debug-f74rr\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:44 crc kubenswrapper[4980]: I0107 04:33:44.018225 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-host\") pod \"crc-debug-f74rr\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:44 crc kubenswrapper[4980]: I0107 04:33:44.018493 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-host\") pod \"crc-debug-f74rr\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:44 crc 
kubenswrapper[4980]: I0107 04:33:44.040841 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdp8\" (UniqueName: \"kubernetes.io/projected/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-kube-api-access-ccdp8\") pod \"crc-debug-f74rr\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:44 crc kubenswrapper[4980]: I0107 04:33:44.169281 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:44 crc kubenswrapper[4980]: I0107 04:33:44.496680 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/crc-debug-f74rr" event={"ID":"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be","Type":"ContainerStarted","Data":"ca4aeb6d6a458cb2818e720c1f0cc737f73791fd6a9ff478508b4499969c95e1"} Jan 07 04:33:45 crc kubenswrapper[4980]: I0107 04:33:45.511062 4980 generic.go:334] "Generic (PLEG): container finished" podID="f66e89bd-5d05-45bd-b8e3-15e9cd9d84be" containerID="ba26f10bb2cad80ec1fa0b8cd4f18da245286d9c7264600b1c4737a0c26e09c7" exitCode=0 Jan 07 04:33:45 crc kubenswrapper[4980]: I0107 04:33:45.511162 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/crc-debug-f74rr" event={"ID":"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be","Type":"ContainerDied","Data":"ba26f10bb2cad80ec1fa0b8cd4f18da245286d9c7264600b1c4737a0c26e09c7"} Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.079945 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-f74rr"] Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.087034 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-f74rr"] Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.664502 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.684459 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccdp8\" (UniqueName: \"kubernetes.io/projected/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-kube-api-access-ccdp8\") pod \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.684592 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-host\") pod \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\" (UID: \"f66e89bd-5d05-45bd-b8e3-15e9cd9d84be\") " Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.684757 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-host" (OuterVolumeSpecName: "host") pod "f66e89bd-5d05-45bd-b8e3-15e9cd9d84be" (UID: "f66e89bd-5d05-45bd-b8e3-15e9cd9d84be"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.685223 4980 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-host\") on node \"crc\" DevicePath \"\"" Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.692883 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-kube-api-access-ccdp8" (OuterVolumeSpecName: "kube-api-access-ccdp8") pod "f66e89bd-5d05-45bd-b8e3-15e9cd9d84be" (UID: "f66e89bd-5d05-45bd-b8e3-15e9cd9d84be"). InnerVolumeSpecName "kube-api-access-ccdp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:33:46 crc kubenswrapper[4980]: I0107 04:33:46.787094 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccdp8\" (UniqueName: \"kubernetes.io/projected/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be-kube-api-access-ccdp8\") on node \"crc\" DevicePath \"\"" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.383102 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-xjfnb"] Jan 07 04:33:47 crc kubenswrapper[4980]: E0107 04:33:47.383463 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66e89bd-5d05-45bd-b8e3-15e9cd9d84be" containerName="container-00" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.383475 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66e89bd-5d05-45bd-b8e3-15e9cd9d84be" containerName="container-00" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.383682 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66e89bd-5d05-45bd-b8e3-15e9cd9d84be" containerName="container-00" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.384278 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.399188 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/0c4d78eb-dd52-42b0-b741-e26927b6f16c-kube-api-access-tftpc\") pod \"crc-debug-xjfnb\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.399242 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c4d78eb-dd52-42b0-b741-e26927b6f16c-host\") pod \"crc-debug-xjfnb\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.501694 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/0c4d78eb-dd52-42b0-b741-e26927b6f16c-kube-api-access-tftpc\") pod \"crc-debug-xjfnb\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.502038 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c4d78eb-dd52-42b0-b741-e26927b6f16c-host\") pod \"crc-debug-xjfnb\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.502130 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c4d78eb-dd52-42b0-b741-e26927b6f16c-host\") pod \"crc-debug-xjfnb\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc 
kubenswrapper[4980]: I0107 04:33:47.521442 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/0c4d78eb-dd52-42b0-b741-e26927b6f16c-kube-api-access-tftpc\") pod \"crc-debug-xjfnb\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.544826 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca4aeb6d6a458cb2818e720c1f0cc737f73791fd6a9ff478508b4499969c95e1" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.544918 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-f74rr" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.703663 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:47 crc kubenswrapper[4980]: I0107 04:33:47.749638 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66e89bd-5d05-45bd-b8e3-15e9cd9d84be" path="/var/lib/kubelet/pods/f66e89bd-5d05-45bd-b8e3-15e9cd9d84be/volumes" Jan 07 04:33:48 crc kubenswrapper[4980]: I0107 04:33:48.558313 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" event={"ID":"0c4d78eb-dd52-42b0-b741-e26927b6f16c","Type":"ContainerStarted","Data":"73ba47fa3ef14cffe2d6e8146a2d2e4393dc8d93012699263055ac813a37533a"} Jan 07 04:33:49 crc kubenswrapper[4980]: I0107 04:33:49.571772 4980 generic.go:334] "Generic (PLEG): container finished" podID="0c4d78eb-dd52-42b0-b741-e26927b6f16c" containerID="8b8298cfadce050bca97c552fad504331470bbc7d3324bf7363b651ffdb0a6d4" exitCode=0 Jan 07 04:33:49 crc kubenswrapper[4980]: I0107 04:33:49.571894 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" 
event={"ID":"0c4d78eb-dd52-42b0-b741-e26927b6f16c","Type":"ContainerDied","Data":"8b8298cfadce050bca97c552fad504331470bbc7d3324bf7363b651ffdb0a6d4"} Jan 07 04:33:49 crc kubenswrapper[4980]: I0107 04:33:49.625196 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-xjfnb"] Jan 07 04:33:49 crc kubenswrapper[4980]: I0107 04:33:49.632787 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ql5tl/crc-debug-xjfnb"] Jan 07 04:33:50 crc kubenswrapper[4980]: I0107 04:33:50.708844 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:50 crc kubenswrapper[4980]: I0107 04:33:50.874578 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c4d78eb-dd52-42b0-b741-e26927b6f16c-host\") pod \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " Jan 07 04:33:50 crc kubenswrapper[4980]: I0107 04:33:50.874711 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d78eb-dd52-42b0-b741-e26927b6f16c-host" (OuterVolumeSpecName: "host") pod "0c4d78eb-dd52-42b0-b741-e26927b6f16c" (UID: "0c4d78eb-dd52-42b0-b741-e26927b6f16c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 04:33:50 crc kubenswrapper[4980]: I0107 04:33:50.874763 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/0c4d78eb-dd52-42b0-b741-e26927b6f16c-kube-api-access-tftpc\") pod \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\" (UID: \"0c4d78eb-dd52-42b0-b741-e26927b6f16c\") " Jan 07 04:33:50 crc kubenswrapper[4980]: I0107 04:33:50.875767 4980 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c4d78eb-dd52-42b0-b741-e26927b6f16c-host\") on node \"crc\" DevicePath \"\"" Jan 07 04:33:50 crc kubenswrapper[4980]: I0107 04:33:50.884431 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4d78eb-dd52-42b0-b741-e26927b6f16c-kube-api-access-tftpc" (OuterVolumeSpecName: "kube-api-access-tftpc") pod "0c4d78eb-dd52-42b0-b741-e26927b6f16c" (UID: "0c4d78eb-dd52-42b0-b741-e26927b6f16c"). InnerVolumeSpecName "kube-api-access-tftpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:33:50 crc kubenswrapper[4980]: I0107 04:33:50.977498 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/0c4d78eb-dd52-42b0-b741-e26927b6f16c-kube-api-access-tftpc\") on node \"crc\" DevicePath \"\"" Jan 07 04:33:51 crc kubenswrapper[4980]: I0107 04:33:51.599314 4980 scope.go:117] "RemoveContainer" containerID="8b8298cfadce050bca97c552fad504331470bbc7d3324bf7363b651ffdb0a6d4" Jan 07 04:33:51 crc kubenswrapper[4980]: I0107 04:33:51.599396 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/crc-debug-xjfnb" Jan 07 04:33:51 crc kubenswrapper[4980]: I0107 04:33:51.750317 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4d78eb-dd52-42b0-b741-e26927b6f16c" path="/var/lib/kubelet/pods/0c4d78eb-dd52-42b0-b741-e26927b6f16c/volumes" Jan 07 04:33:52 crc kubenswrapper[4980]: I0107 04:33:52.735576 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:33:52 crc kubenswrapper[4980]: E0107 04:33:52.736232 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:34:04 crc kubenswrapper[4980]: I0107 04:34:04.735940 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:34:04 crc kubenswrapper[4980]: E0107 04:34:04.736950 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:34:06 crc kubenswrapper[4980]: I0107 04:34:06.486047 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67c95874-pm99w_bc960852-4c05-4805-8251-8336bb022087/barbican-api/0.log" Jan 07 04:34:06 crc kubenswrapper[4980]: I0107 04:34:06.622049 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-6f67c95874-pm99w_bc960852-4c05-4805-8251-8336bb022087/barbican-api-log/0.log" Jan 07 04:34:06 crc kubenswrapper[4980]: I0107 04:34:06.675285 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78f9866cd4-xmbrp_eb08ed0c-20e9-44f7-9472-9d1899a51d32/barbican-keystone-listener/0.log" Jan 07 04:34:06 crc kubenswrapper[4980]: I0107 04:34:06.733443 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78f9866cd4-xmbrp_eb08ed0c-20e9-44f7-9472-9d1899a51d32/barbican-keystone-listener-log/0.log" Jan 07 04:34:06 crc kubenswrapper[4980]: I0107 04:34:06.862240 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-558f69c5-b5wnd_2d2661ce-3148-48ac-a1b2-af154d207c5a/barbican-worker/0.log" Jan 07 04:34:06 crc kubenswrapper[4980]: I0107 04:34:06.882149 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-558f69c5-b5wnd_2d2661ce-3148-48ac-a1b2-af154d207c5a/barbican-worker-log/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.026517 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf_b4dce042-2c6f-4a74-bbb3-84a79cfb02a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.126787 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/ceilometer-central-agent/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.137619 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/ceilometer-notification-agent/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.203181 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/proxy-httpd/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.255310 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/sg-core/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.374783 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a441a7ef-1973-4f21-8ec1-834904f5bcf7/cinder-api/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.406424 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a441a7ef-1973-4f21-8ec1-834904f5bcf7/cinder-api-log/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.548343 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e880ebfc-1037-4101-b489-84fc6660d45f/cinder-scheduler/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.589286 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e880ebfc-1037-4101-b489-84fc6660d45f/probe/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.730810 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7pljr_67224118-a228-4d50-a70e-1d675bd7df2e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.800059 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg_553631f5-8b26-4a24-bc27-cdbf1ad869db/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:07 crc kubenswrapper[4980]: I0107 04:34:07.898653 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n8zmd_2ac37f68-f3d9-42eb-a68c-d2526b730663/init/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.115673 
4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n8zmd_2ac37f68-f3d9-42eb-a68c-d2526b730663/init/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.186938 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp_111ee99f-4f5d-4647-9ee9-33addfaad13e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.199298 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n8zmd_2ac37f68-f3d9-42eb-a68c-d2526b730663/dnsmasq-dns/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.314547 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f122c82e-51a4-4b1c-8457-02b12f045c52/glance-httpd/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.361249 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f122c82e-51a4-4b1c-8457-02b12f045c52/glance-log/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.511054 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0/glance-log/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.527461 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0/glance-httpd/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.696328 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-565d4f6c4b-gj6mz_5d0304bc-69af-4a65-90e0-088a428990a1/horizon/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.825651 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-rks2b_010cdc43-6f59-4a62-b7ae-b98c5cdec4e4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.958257 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-565d4f6c4b-gj6mz_5d0304bc-69af-4a65-90e0-088a428990a1/horizon-log/0.log" Jan 07 04:34:08 crc kubenswrapper[4980]: I0107 04:34:08.976471 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-r96jt_accf2eeb-147d-49a3-8aa3-06d9e52a2fb4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:09 crc kubenswrapper[4980]: I0107 04:34:09.211241 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29462641-qspbr_f0d0d398-f0fc-4cec-abb7-7c5eca5254cd/keystone-cron/0.log" Jan 07 04:34:09 crc kubenswrapper[4980]: I0107 04:34:09.354382 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-85979fc5c6-rh7l6_ede07643-4b02-490a-a73d-e6c783a138e6/keystone-api/0.log" Jan 07 04:34:09 crc kubenswrapper[4980]: I0107 04:34:09.398345 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_96c1a5f6-5439-4aa4-a1c0-27408fbbe977/kube-state-metrics/0.log" Jan 07 04:34:09 crc kubenswrapper[4980]: I0107 04:34:09.563802 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bngfg_952aa7ac-68e0-4f49-bd80-407e2181fa05/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:09 crc kubenswrapper[4980]: I0107 04:34:09.942273 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5ffd847cb9-kf6tb_6369f2cd-b133-42d0-bac5-f4790bf08ae5/neutron-httpd/0.log" Jan 07 04:34:10 crc kubenswrapper[4980]: I0107 04:34:10.006602 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5ffd847cb9-kf6tb_6369f2cd-b133-42d0-bac5-f4790bf08ae5/neutron-api/0.log" Jan 07 04:34:10 crc kubenswrapper[4980]: I0107 04:34:10.247740 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl_f588fdb7-1285-44cd-bf64-9b1681863e15/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:10 crc kubenswrapper[4980]: I0107 04:34:10.543376 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_111ec39e-2b02-4d0d-89cf-9484a6399fd7/nova-api-log/0.log" Jan 07 04:34:10 crc kubenswrapper[4980]: I0107 04:34:10.652164 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_86f2272b-45b2-490c-a64e-f4367491036b/nova-cell0-conductor-conductor/0.log" Jan 07 04:34:10 crc kubenswrapper[4980]: I0107 04:34:10.906326 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c840061e-cf97-4c53-b581-805806d7343c/nova-cell1-conductor-conductor/0.log" Jan 07 04:34:10 crc kubenswrapper[4980]: I0107 04:34:10.936590 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_111ec39e-2b02-4d0d-89cf-9484a6399fd7/nova-api-api/0.log" Jan 07 04:34:10 crc kubenswrapper[4980]: I0107 04:34:10.972862 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0fca998b-28f9-4611-99f7-2cb9f2cb8042/nova-cell1-novncproxy-novncproxy/0.log" Jan 07 04:34:11 crc kubenswrapper[4980]: I0107 04:34:11.142812 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-w2gvj_b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:11 crc kubenswrapper[4980]: I0107 04:34:11.240029 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_c7266195-f7f5-40e2-9c60-97a0d6684272/nova-metadata-log/0.log" Jan 07 04:34:11 crc kubenswrapper[4980]: I0107 04:34:11.534599 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_62709b59-f907-4b1f-b0a4-bab71ce12d86/nova-scheduler-scheduler/0.log" Jan 07 04:34:11 crc kubenswrapper[4980]: I0107 04:34:11.643794 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3336b2a3-f175-44d1-9771-adabe71eea6c/mysql-bootstrap/0.log" Jan 07 04:34:11 crc kubenswrapper[4980]: I0107 04:34:11.802911 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3336b2a3-f175-44d1-9771-adabe71eea6c/mysql-bootstrap/0.log" Jan 07 04:34:11 crc kubenswrapper[4980]: I0107 04:34:11.857026 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3336b2a3-f175-44d1-9771-adabe71eea6c/galera/0.log" Jan 07 04:34:11 crc kubenswrapper[4980]: I0107 04:34:11.998084 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2/mysql-bootstrap/0.log" Jan 07 04:34:12 crc kubenswrapper[4980]: I0107 04:34:12.219239 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2/galera/0.log" Jan 07 04:34:12 crc kubenswrapper[4980]: I0107 04:34:12.298146 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2/mysql-bootstrap/0.log" Jan 07 04:34:12 crc kubenswrapper[4980]: I0107 04:34:12.428956 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ec7c2df8-5955-4063-831a-7d1371e5e983/openstackclient/0.log" Jan 07 04:34:12 crc kubenswrapper[4980]: I0107 04:34:12.492245 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_c7266195-f7f5-40e2-9c60-97a0d6684272/nova-metadata-metadata/0.log" Jan 07 04:34:12 crc kubenswrapper[4980]: I0107 04:34:12.580700 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-94rwj_4567269f-c5aa-44a8-8e68-c0dc01c2b55c/ovn-controller/0.log" Jan 07 04:34:12 crc kubenswrapper[4980]: I0107 04:34:12.726632 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hjmmm_b8f6a4d2-652b-4f9a-ad2e-b974c9062112/openstack-network-exporter/0.log" Jan 07 04:34:12 crc kubenswrapper[4980]: I0107 04:34:12.841428 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovsdb-server-init/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.083397 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovs-vswitchd/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.139883 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovsdb-server/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.140254 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovsdb-server-init/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.296054 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-w9lbc_b6c5efe0-317c-4de6-9d52-c8790db72ae6/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.374608 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5687f55-2760-4b17-949f-7a691768ba40/openstack-network-exporter/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 
04:34:13.471901 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5687f55-2760-4b17-949f-7a691768ba40/ovn-northd/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.590186 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc84f69f-9bab-40e5-80a8-75266ef8f4b7/openstack-network-exporter/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.704594 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc84f69f-9bab-40e5-80a8-75266ef8f4b7/ovsdbserver-nb/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.743892 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_808ebed8-cef0-4938-9ad2-64f28d9c8af2/openstack-network-exporter/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.789378 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_808ebed8-cef0-4938-9ad2-64f28d9c8af2/ovsdbserver-sb/0.log" Jan 07 04:34:13 crc kubenswrapper[4980]: I0107 04:34:13.971057 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f54697b86-x8z4v_0d68262c-96ba-42af-8b46-f13aa424ba0d/placement-api/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.064059 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f54697b86-x8z4v_0d68262c-96ba-42af-8b46-f13aa424ba0d/placement-log/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.136136 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_af77d785-4fe8-4d72-a393-a7da215c4c55/setup-container/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.348230 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_837d407a-b0ff-4fec-8c21-e30b95cd3d7b/setup-container/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.355727 4980 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_af77d785-4fe8-4d72-a393-a7da215c4c55/rabbitmq/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.373757 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_af77d785-4fe8-4d72-a393-a7da215c4c55/setup-container/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.526135 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_837d407a-b0ff-4fec-8c21-e30b95cd3d7b/setup-container/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.624011 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_837d407a-b0ff-4fec-8c21-e30b95cd3d7b/rabbitmq/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.628293 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv_5319b8be-e13c-4d5f-92d5-41d82748a080/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.820972 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-phv2v_3b42fafe-35e4-45a7-b3c9-95d8b9caa607/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:14 crc kubenswrapper[4980]: I0107 04:34:14.931736 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs_6c12fa92-7a85-42c7-90f2-3b837c2067f8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.170527 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tv787_841894d0-7f26-4642-ac09-1395082e288e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.185728 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-q252v_657c1546-50a5-49f6-9db2-a85ade05e059/ssh-known-hosts-edpm-deployment/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.372220 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85968999bf-kv4dj_df0744c9-9130-4abb-be49-156d72cc1a20/proxy-server/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.486383 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85968999bf-kv4dj_df0744c9-9130-4abb-be49-156d72cc1a20/proxy-httpd/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.538052 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-89vfr_a3566d37-de40-4834-9bbc-48dc6fe7e9c5/swift-ring-rebalance/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.619724 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-auditor/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.680310 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-reaper/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.731101 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-replicator/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.828411 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-auditor/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.875748 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-server/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.911949 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-replicator/0.log" Jan 07 04:34:15 crc kubenswrapper[4980]: I0107 04:34:15.950773 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-server/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.011355 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-updater/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.061791 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-auditor/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.160689 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-expirer/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.181033 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-replicator/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.210247 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-server/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.285685 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-updater/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.342941 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/swift-recon-cron/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.377405 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/rsync/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.543355 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb_7cd7afa6-8208-47e3-b598-0f2e8578dc3f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.589013 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4d0a99f6-8fa8-40a7-b994-16a2e287c6ee/tempest-tests-tempest-tests-runner/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.768970 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d8eaae2d-e134-40ee-b33a-51a04571798a/test-operator-logs-container/0.log" Jan 07 04:34:16 crc kubenswrapper[4980]: I0107 04:34:16.827848 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs_13eceb60-89ef-4f65-9639-7295976d7c72/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:34:19 crc kubenswrapper[4980]: I0107 04:34:19.735859 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:34:19 crc kubenswrapper[4980]: E0107 04:34:19.736386 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:34:26 crc kubenswrapper[4980]: I0107 04:34:26.143675 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_cf13ed1a-99f7-4574-a18a-7e559c48ddaa/memcached/0.log" Jan 07 04:34:32 crc kubenswrapper[4980]: I0107 04:34:32.736143 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:34:32 crc kubenswrapper[4980]: E0107 04:34:32.737105 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:34:42 crc kubenswrapper[4980]: I0107 04:34:42.661814 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/util/0.log" Jan 07 04:34:42 crc kubenswrapper[4980]: I0107 04:34:42.838028 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/util/0.log" Jan 07 04:34:42 crc kubenswrapper[4980]: I0107 04:34:42.877439 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/pull/0.log" Jan 07 04:34:42 crc kubenswrapper[4980]: I0107 04:34:42.918894 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/pull/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.079391 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/util/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.081335 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/pull/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.153259 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/extract/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.288366 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-mqsl2_2e8333e2-a664-4f9a-8ddb-07e31ddc3020/manager/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.337097 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-qj7hn_81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61/manager/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.486407 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-8rjjc_cc01f5c0-320a-4645-bb96-5bd8b6490e08/manager/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.586760 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-gnrv9_920991d2-089f-4864-8237-9684c6282a04/manager/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.697561 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-vghwj_93a3e6e3-bd9b-4883-923c-6d58ae83000d/manager/0.log" Jan 07 04:34:43 crc kubenswrapper[4980]: I0107 04:34:43.817314 
4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-r9bm7_509933ce-8dca-4f14-bdc4-a5f1608954b3/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.031711 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-s5s7g_0b63f351-f7ac-44a4-8a65-a6357043af12/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.130888 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-c5hk9_4223a956-7692-4bcc-8193-02312792b1f9/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.251865 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-ndxck_d8d586b5-b752-4122-99af-ba4ce3bbad29/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.275600 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-5jzx4_86933336-6f6c-4327-bcde-a4d1a6caba77/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.423998 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-cc9rk_edf44de7-04e1-435c-a943-c47873d4e364/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.497567 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-kpmrn_82ed1518-12d9-412b-86cc-03fbb1f74bd6/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.702339 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-55fcj_96049d0d-7c90-4cab-a18c-5fbd4e9f8373/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 
04:34:44.710699 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-fd8dn_881d9164-37f7-48da-b203-a2e5db8e2d23/manager/0.log" Jan 07 04:34:44 crc kubenswrapper[4980]: I0107 04:34:44.823769 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd72mjfc_b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c/manager/0.log" Jan 07 04:34:45 crc kubenswrapper[4980]: I0107 04:34:45.227018 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54bc58988c-zrwgp_4af3dbe2-f463-48b6-9264-9d8ad4970648/operator/0.log" Jan 07 04:34:45 crc kubenswrapper[4980]: I0107 04:34:45.278540 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-q54qd_5cb72193-59c1-49a9-b1fd-26191d36f265/registry-server/0.log" Jan 07 04:34:45 crc kubenswrapper[4980]: I0107 04:34:45.635224 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-fdnhb_c86a562f-bdd6-4463-8edc-6ce72f41af16/manager/0.log" Jan 07 04:34:45 crc kubenswrapper[4980]: I0107 04:34:45.854830 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-d25kc_58abb189-9361-4eac-8663-55e110e21383/manager/0.log" Jan 07 04:34:45 crc kubenswrapper[4980]: I0107 04:34:45.903112 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fxr6h_e4c6355b-ca56-47e9-897e-ed6b641d456a/operator/0.log" Jan 07 04:34:45 crc kubenswrapper[4980]: I0107 04:34:45.953792 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7bbf496545-vdwhj_4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3/manager/0.log" Jan 07 04:34:46 crc 
kubenswrapper[4980]: I0107 04:34:46.464534 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-8qlqr_b6565eee-ab9b-4a1a-a5a8-6036df399731/manager/0.log" Jan 07 04:34:46 crc kubenswrapper[4980]: I0107 04:34:46.465302 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-b4dq4_e3f2e1ae-fa58-4090-909d-4efdacb15545/manager/0.log" Jan 07 04:34:46 crc kubenswrapper[4980]: I0107 04:34:46.624795 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-qlpkh_c1ae6abf-8410-4816-b2a1-b6a9f0550eb2/manager/0.log" Jan 07 04:34:46 crc kubenswrapper[4980]: I0107 04:34:46.666244 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-d7p5d_28cf4151-f7be-4992-87f8-e34bf1d0a9c0/manager/0.log" Jan 07 04:34:47 crc kubenswrapper[4980]: I0107 04:34:47.735685 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:34:47 crc kubenswrapper[4980]: E0107 04:34:47.736180 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:35:02 crc kubenswrapper[4980]: I0107 04:35:02.736374 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:35:02 crc kubenswrapper[4980]: E0107 04:35:02.737049 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:35:07 crc kubenswrapper[4980]: I0107 04:35:07.067052 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xtj2h_7e4e8bcd-d566-43ed-ba1d-e5c367faca7d/control-plane-machine-set-operator/0.log" Jan 07 04:35:07 crc kubenswrapper[4980]: I0107 04:35:07.163154 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sjzgd_b7a65dcf-9933-4d66-92c4-e1c9d9e209e9/kube-rbac-proxy/0.log" Jan 07 04:35:07 crc kubenswrapper[4980]: I0107 04:35:07.240481 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sjzgd_b7a65dcf-9933-4d66-92c4-e1c9d9e209e9/machine-api-operator/0.log" Jan 07 04:35:14 crc kubenswrapper[4980]: I0107 04:35:14.736564 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:35:14 crc kubenswrapper[4980]: E0107 04:35:14.737343 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:35:22 crc kubenswrapper[4980]: I0107 04:35:22.615130 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-vchrs_a5762e00-3e81-401d-8365-8d6791ecbf4f/cert-manager-controller/0.log" Jan 07 04:35:22 crc kubenswrapper[4980]: I0107 04:35:22.798935 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dfqdk_c7ebfede-1363-495d-b143-2e8db44394c0/cert-manager-cainjector/0.log" Jan 07 04:35:22 crc kubenswrapper[4980]: I0107 04:35:22.914008 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25nfw_59f9bc30-5a23-4161-95fa-68d941208670/cert-manager-webhook/0.log" Jan 07 04:35:28 crc kubenswrapper[4980]: I0107 04:35:28.736550 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:35:28 crc kubenswrapper[4980]: E0107 04:35:28.737767 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:35:39 crc kubenswrapper[4980]: I0107 04:35:39.240222 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-rc5d6_680a6c39-957f-43ff-82e4-c70f626c14c6/nmstate-console-plugin/0.log" Jan 07 04:35:39 crc kubenswrapper[4980]: I0107 04:35:39.251177 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fc77g_c5c3ce37-b71d-4353-b725-a82d5aeb2f81/nmstate-handler/0.log" Jan 07 04:35:39 crc kubenswrapper[4980]: I0107 04:35:39.430528 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-gd4b8_550a6236-4f98-4b9a-ad9d-bce2a985a853/nmstate-metrics/0.log" Jan 07 04:35:39 crc kubenswrapper[4980]: I0107 04:35:39.439180 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-gd4b8_550a6236-4f98-4b9a-ad9d-bce2a985a853/kube-rbac-proxy/0.log" Jan 07 04:35:39 crc kubenswrapper[4980]: I0107 04:35:39.528674 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-jjfdp_b6737cd2-b163-4a8a-a674-54ba3a715f91/nmstate-operator/0.log" Jan 07 04:35:39 crc kubenswrapper[4980]: I0107 04:35:39.639726 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-mspkv_63e6b18c-21c5-4d1d-85b9-0db97630b4b8/nmstate-webhook/0.log" Jan 07 04:35:39 crc kubenswrapper[4980]: I0107 04:35:39.735581 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:35:39 crc kubenswrapper[4980]: E0107 04:35:39.735857 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:35:50 crc kubenswrapper[4980]: I0107 04:35:50.735542 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:35:50 crc kubenswrapper[4980]: E0107 04:35:50.736275 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.354413 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-vg2gl_fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3/kube-rbac-proxy/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.441836 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-vg2gl_fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3/controller/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.543609 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.750459 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.769313 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.776773 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.792141 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.945706 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 
04:35:55.950058 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.967777 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:35:55 crc kubenswrapper[4980]: I0107 04:35:55.995802 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.158422 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.180937 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.183947 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/controller/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.187925 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.352669 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/kube-rbac-proxy-frr/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.352761 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/kube-rbac-proxy/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.446135 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/frr-metrics/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.518490 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/reloader/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.670263 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-bn2x7_dc6c1183-144b-4b67-baad-9e04c4492453/frr-k8s-webhook-server/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.761886 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65b595589f-ljh57_a2680c24-a9d4-4daa-9ed0-3bc391695662/manager/0.log" Jan 07 04:35:56 crc kubenswrapper[4980]: I0107 04:35:56.957490 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-65b4bf7cb4-dm7j7_db5d65a6-7f55-491f-8ea1-e6f3c1715c00/webhook-server/0.log" Jan 07 04:35:57 crc kubenswrapper[4980]: I0107 04:35:57.076217 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7qwgg_bf49087b-cf7a-41cf-85a4-e76d00ae1381/kube-rbac-proxy/0.log" Jan 07 04:35:57 crc kubenswrapper[4980]: I0107 04:35:57.562710 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/frr/0.log" Jan 07 04:35:57 crc kubenswrapper[4980]: I0107 04:35:57.615892 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7qwgg_bf49087b-cf7a-41cf-85a4-e76d00ae1381/speaker/0.log" Jan 07 04:36:03 crc kubenswrapper[4980]: I0107 04:36:03.744828 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:36:03 crc kubenswrapper[4980]: E0107 04:36:03.745593 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.355944 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/util/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.499709 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/util/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.545051 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/pull/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.545123 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/pull/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.679540 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/pull/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.693112 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/util/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.718028 4980 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/extract/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.867505 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/util/0.log" Jan 07 04:36:11 crc kubenswrapper[4980]: I0107 04:36:11.984740 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/util/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.027030 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/pull/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.057650 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/pull/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.193928 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/pull/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.195092 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/util/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.233648 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/extract/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.377970 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-utilities/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.545586 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-utilities/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.551159 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-content/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.566014 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-content/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.705910 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-utilities/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.723000 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-content/0.log" Jan 07 04:36:12 crc kubenswrapper[4980]: I0107 04:36:12.927301 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-utilities/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.081603 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-content/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.132282 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-content/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.139636 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-utilities/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.285620 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/registry-server/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.350375 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-content/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.362906 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-utilities/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.585976 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-t5mfp_cbc2df67-d00a-4200-b46f-b9eca0da0f4f/marketplace-operator/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.808517 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/registry-server/0.log" Jan 07 04:36:13 crc kubenswrapper[4980]: I0107 04:36:13.876372 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-utilities/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.040036 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-content/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.042289 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-content/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.045497 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-utilities/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.188089 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-content/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.244302 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-utilities/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.361396 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-utilities/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.387648 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/registry-server/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.554836 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-content/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.560525 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-utilities/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.585508 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-content/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.754219 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-content/0.log" Jan 07 04:36:14 crc kubenswrapper[4980]: I0107 04:36:14.778698 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-utilities/0.log" Jan 07 04:36:15 crc kubenswrapper[4980]: I0107 04:36:15.215838 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/registry-server/0.log" Jan 07 04:36:18 crc kubenswrapper[4980]: I0107 04:36:18.736044 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:36:18 crc kubenswrapper[4980]: E0107 04:36:18.736834 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:36:31 crc 
kubenswrapper[4980]: I0107 04:36:31.736155 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:36:31 crc kubenswrapper[4980]: E0107 04:36:31.736954 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:36:38 crc kubenswrapper[4980]: E0107 04:36:38.463917 4980 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.65:54744->38.102.83.65:38867: write tcp 38.102.83.65:54744->38.102.83.65:38867: write: broken pipe Jan 07 04:36:42 crc kubenswrapper[4980]: I0107 04:36:42.736746 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:36:42 crc kubenswrapper[4980]: E0107 04:36:42.737793 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:36:57 crc kubenswrapper[4980]: I0107 04:36:57.737278 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:36:57 crc kubenswrapper[4980]: E0107 04:36:57.738253 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:37:08 crc kubenswrapper[4980]: I0107 04:37:08.735749 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:37:08 crc kubenswrapper[4980]: E0107 04:37:08.736997 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:37:19 crc kubenswrapper[4980]: I0107 04:37:19.735295 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:37:19 crc kubenswrapper[4980]: E0107 04:37:19.735896 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:37:31 crc kubenswrapper[4980]: I0107 04:37:31.736111 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:37:31 crc kubenswrapper[4980]: E0107 04:37:31.737277 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:37:43 crc kubenswrapper[4980]: I0107 04:37:43.753911 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:37:45 crc kubenswrapper[4980]: I0107 04:37:45.045877 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"734a74dc3c192c49aa206faa107f7ca49bb64ac582050f82bf6fdf1e63cf5035"} Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.036351 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h2g4f"] Jan 07 04:37:49 crc kubenswrapper[4980]: E0107 04:37:49.038492 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4d78eb-dd52-42b0-b741-e26927b6f16c" containerName="container-00" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.038591 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4d78eb-dd52-42b0-b741-e26927b6f16c" containerName="container-00" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.038879 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4d78eb-dd52-42b0-b741-e26927b6f16c" containerName="container-00" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.040433 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.057304 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2g4f"] Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.182744 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-utilities\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.182972 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzr6f\" (UniqueName: \"kubernetes.io/projected/d381ae9b-32da-4f00-9129-6cd97551a9c6-kube-api-access-jzr6f\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.183091 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-catalog-content\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.284462 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-utilities\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.284790 4980 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jzr6f\" (UniqueName: \"kubernetes.io/projected/d381ae9b-32da-4f00-9129-6cd97551a9c6-kube-api-access-jzr6f\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.284881 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-catalog-content\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.285115 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-utilities\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.285376 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-catalog-content\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.306482 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzr6f\" (UniqueName: \"kubernetes.io/projected/d381ae9b-32da-4f00-9129-6cd97551a9c6-kube-api-access-jzr6f\") pod \"redhat-marketplace-h2g4f\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.410388 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:49 crc kubenswrapper[4980]: I0107 04:37:49.916911 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2g4f"] Jan 07 04:37:50 crc kubenswrapper[4980]: I0107 04:37:50.131828 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerStarted","Data":"2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4"} Jan 07 04:37:50 crc kubenswrapper[4980]: I0107 04:37:50.132092 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerStarted","Data":"7b45edb094b5b3b91b6ad519b1b344b794e2bcb32b5d19807be4dc1abaaecdf5"} Jan 07 04:37:50 crc kubenswrapper[4980]: I0107 04:37:50.135432 4980 generic.go:334] "Generic (PLEG): container finished" podID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerID="3e9a9f3ce8302a04b80596ce13f668a8b7e670fdb9e9ae871bc285fcd3f4fdee" exitCode=0 Jan 07 04:37:50 crc kubenswrapper[4980]: I0107 04:37:50.135462 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ql5tl/must-gather-786sv" event={"ID":"9627588f-8915-4b95-b0df-ef9e7abb9009","Type":"ContainerDied","Data":"3e9a9f3ce8302a04b80596ce13f668a8b7e670fdb9e9ae871bc285fcd3f4fdee"} Jan 07 04:37:50 crc kubenswrapper[4980]: I0107 04:37:50.136134 4980 scope.go:117] "RemoveContainer" containerID="3e9a9f3ce8302a04b80596ce13f668a8b7e670fdb9e9ae871bc285fcd3f4fdee" Jan 07 04:37:51 crc kubenswrapper[4980]: I0107 04:37:51.128802 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ql5tl_must-gather-786sv_9627588f-8915-4b95-b0df-ef9e7abb9009/gather/0.log" Jan 07 04:37:51 crc kubenswrapper[4980]: I0107 04:37:51.150622 4980 generic.go:334] "Generic (PLEG): container finished" 
podID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerID="2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4" exitCode=0 Jan 07 04:37:51 crc kubenswrapper[4980]: I0107 04:37:51.150718 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerDied","Data":"2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4"} Jan 07 04:37:51 crc kubenswrapper[4980]: I0107 04:37:51.156264 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 04:37:52 crc kubenswrapper[4980]: I0107 04:37:52.162455 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerStarted","Data":"044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa"} Jan 07 04:37:53 crc kubenswrapper[4980]: I0107 04:37:53.176881 4980 generic.go:334] "Generic (PLEG): container finished" podID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerID="044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa" exitCode=0 Jan 07 04:37:53 crc kubenswrapper[4980]: I0107 04:37:53.177086 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerDied","Data":"044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa"} Jan 07 04:37:54 crc kubenswrapper[4980]: I0107 04:37:54.192907 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerStarted","Data":"577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f"} Jan 07 04:37:54 crc kubenswrapper[4980]: I0107 04:37:54.224643 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-h2g4f" podStartSLOduration=2.73770064 podStartE2EDuration="5.224619286s" podCreationTimestamp="2026-01-07 04:37:49 +0000 UTC" firstStartedPulling="2026-01-07 04:37:51.155891627 +0000 UTC m=+3917.721586382" lastFinishedPulling="2026-01-07 04:37:53.642810253 +0000 UTC m=+3920.208505028" observedRunningTime="2026-01-07 04:37:54.221049016 +0000 UTC m=+3920.786743761" watchObservedRunningTime="2026-01-07 04:37:54.224619286 +0000 UTC m=+3920.790314051" Jan 07 04:37:58 crc kubenswrapper[4980]: I0107 04:37:58.962147 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ql5tl/must-gather-786sv"] Jan 07 04:37:58 crc kubenswrapper[4980]: I0107 04:37:58.963162 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ql5tl/must-gather-786sv" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerName="copy" containerID="cri-o://491354541b664ff9f4a1308d7a7840cab7e9f815366aa67b7d7b9628dae8fd26" gracePeriod=2 Jan 07 04:37:58 crc kubenswrapper[4980]: I0107 04:37:58.968529 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ql5tl/must-gather-786sv"] Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.286249 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ql5tl_must-gather-786sv_9627588f-8915-4b95-b0df-ef9e7abb9009/copy/0.log" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.287003 4980 generic.go:334] "Generic (PLEG): container finished" podID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerID="491354541b664ff9f4a1308d7a7840cab7e9f815366aa67b7d7b9628dae8fd26" exitCode=143 Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.410753 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.410814 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.501639 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.507130 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ql5tl_must-gather-786sv_9627588f-8915-4b95-b0df-ef9e7abb9009/copy/0.log" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.507624 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.547046 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4z2x\" (UniqueName: \"kubernetes.io/projected/9627588f-8915-4b95-b0df-ef9e7abb9009-kube-api-access-n4z2x\") pod \"9627588f-8915-4b95-b0df-ef9e7abb9009\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.547105 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9627588f-8915-4b95-b0df-ef9e7abb9009-must-gather-output\") pod \"9627588f-8915-4b95-b0df-ef9e7abb9009\" (UID: \"9627588f-8915-4b95-b0df-ef9e7abb9009\") " Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.556773 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9627588f-8915-4b95-b0df-ef9e7abb9009-kube-api-access-n4z2x" (OuterVolumeSpecName: "kube-api-access-n4z2x") pod "9627588f-8915-4b95-b0df-ef9e7abb9009" (UID: "9627588f-8915-4b95-b0df-ef9e7abb9009"). InnerVolumeSpecName "kube-api-access-n4z2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.650160 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4z2x\" (UniqueName: \"kubernetes.io/projected/9627588f-8915-4b95-b0df-ef9e7abb9009-kube-api-access-n4z2x\") on node \"crc\" DevicePath \"\"" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.702490 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9627588f-8915-4b95-b0df-ef9e7abb9009-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9627588f-8915-4b95-b0df-ef9e7abb9009" (UID: "9627588f-8915-4b95-b0df-ef9e7abb9009"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.750328 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" path="/var/lib/kubelet/pods/9627588f-8915-4b95-b0df-ef9e7abb9009/volumes" Jan 07 04:37:59 crc kubenswrapper[4980]: I0107 04:37:59.752687 4980 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9627588f-8915-4b95-b0df-ef9e7abb9009-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 07 04:38:00 crc kubenswrapper[4980]: I0107 04:38:00.299649 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ql5tl_must-gather-786sv_9627588f-8915-4b95-b0df-ef9e7abb9009/copy/0.log" Jan 07 04:38:00 crc kubenswrapper[4980]: I0107 04:38:00.300366 4980 scope.go:117] "RemoveContainer" containerID="491354541b664ff9f4a1308d7a7840cab7e9f815366aa67b7d7b9628dae8fd26" Jan 07 04:38:00 crc kubenswrapper[4980]: I0107 04:38:00.300376 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ql5tl/must-gather-786sv" Jan 07 04:38:00 crc kubenswrapper[4980]: I0107 04:38:00.332489 4980 scope.go:117] "RemoveContainer" containerID="3e9a9f3ce8302a04b80596ce13f668a8b7e670fdb9e9ae871bc285fcd3f4fdee" Jan 07 04:38:00 crc kubenswrapper[4980]: I0107 04:38:00.391425 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:38:00 crc kubenswrapper[4980]: I0107 04:38:00.451195 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2g4f"] Jan 07 04:38:02 crc kubenswrapper[4980]: I0107 04:38:02.321990 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h2g4f" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="registry-server" containerID="cri-o://577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f" gracePeriod=2 Jan 07 04:38:02 crc kubenswrapper[4980]: I0107 04:38:02.883288 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.014475 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-utilities\") pod \"d381ae9b-32da-4f00-9129-6cd97551a9c6\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.014591 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-catalog-content\") pod \"d381ae9b-32da-4f00-9129-6cd97551a9c6\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.014655 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzr6f\" (UniqueName: \"kubernetes.io/projected/d381ae9b-32da-4f00-9129-6cd97551a9c6-kube-api-access-jzr6f\") pod \"d381ae9b-32da-4f00-9129-6cd97551a9c6\" (UID: \"d381ae9b-32da-4f00-9129-6cd97551a9c6\") " Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.015743 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-utilities" (OuterVolumeSpecName: "utilities") pod "d381ae9b-32da-4f00-9129-6cd97551a9c6" (UID: "d381ae9b-32da-4f00-9129-6cd97551a9c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.024699 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d381ae9b-32da-4f00-9129-6cd97551a9c6-kube-api-access-jzr6f" (OuterVolumeSpecName: "kube-api-access-jzr6f") pod "d381ae9b-32da-4f00-9129-6cd97551a9c6" (UID: "d381ae9b-32da-4f00-9129-6cd97551a9c6"). InnerVolumeSpecName "kube-api-access-jzr6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.042008 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d381ae9b-32da-4f00-9129-6cd97551a9c6" (UID: "d381ae9b-32da-4f00-9129-6cd97551a9c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.116967 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.116999 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzr6f\" (UniqueName: \"kubernetes.io/projected/d381ae9b-32da-4f00-9129-6cd97551a9c6-kube-api-access-jzr6f\") on node \"crc\" DevicePath \"\"" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.117008 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d381ae9b-32da-4f00-9129-6cd97551a9c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.339840 4980 generic.go:334] "Generic (PLEG): container finished" podID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerID="577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f" exitCode=0 Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.339898 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerDied","Data":"577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f"} Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.339936 4980 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-h2g4f" event={"ID":"d381ae9b-32da-4f00-9129-6cd97551a9c6","Type":"ContainerDied","Data":"7b45edb094b5b3b91b6ad519b1b344b794e2bcb32b5d19807be4dc1abaaecdf5"} Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.339966 4980 scope.go:117] "RemoveContainer" containerID="577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.340165 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2g4f" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.373795 4980 scope.go:117] "RemoveContainer" containerID="044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.404800 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2g4f"] Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.416105 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2g4f"] Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.418972 4980 scope.go:117] "RemoveContainer" containerID="2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.471357 4980 scope.go:117] "RemoveContainer" containerID="577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f" Jan 07 04:38:03 crc kubenswrapper[4980]: E0107 04:38:03.472389 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f\": container with ID starting with 577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f not found: ID does not exist" containerID="577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.472636 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f"} err="failed to get container status \"577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f\": rpc error: code = NotFound desc = could not find container \"577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f\": container with ID starting with 577b1d70bfd2188f7d5b1b2ac6257e97347ce7b11274242efe92d9510fd9881f not found: ID does not exist" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.472819 4980 scope.go:117] "RemoveContainer" containerID="044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa" Jan 07 04:38:03 crc kubenswrapper[4980]: E0107 04:38:03.473638 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa\": container with ID starting with 044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa not found: ID does not exist" containerID="044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.473703 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa"} err="failed to get container status \"044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa\": rpc error: code = NotFound desc = could not find container \"044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa\": container with ID starting with 044fccf924703d1b0342dc70714e46848f7ded217caa28e0b31caa5261045cfa not found: ID does not exist" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.473755 4980 scope.go:117] "RemoveContainer" containerID="2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4" Jan 07 04:38:03 crc kubenswrapper[4980]: E0107 
04:38:03.481093 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4\": container with ID starting with 2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4 not found: ID does not exist" containerID="2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.481240 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4"} err="failed to get container status \"2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4\": rpc error: code = NotFound desc = could not find container \"2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4\": container with ID starting with 2f6bc2473cf44821bf2a2fad94a0a0515f175c639d8e69acfc819625ee3bacf4 not found: ID does not exist" Jan 07 04:38:03 crc kubenswrapper[4980]: I0107 04:38:03.755318 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" path="/var/lib/kubelet/pods/d381ae9b-32da-4f00-9129-6cd97551a9c6/volumes" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.076316 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-njcjg"] Jan 07 04:39:06 crc kubenswrapper[4980]: E0107 04:39:06.077368 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="extract-content" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077384 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="extract-content" Jan 07 04:39:06 crc kubenswrapper[4980]: E0107 04:39:06.077406 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" 
containerName="gather" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077414 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerName="gather" Jan 07 04:39:06 crc kubenswrapper[4980]: E0107 04:39:06.077444 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="extract-utilities" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077453 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="extract-utilities" Jan 07 04:39:06 crc kubenswrapper[4980]: E0107 04:39:06.077477 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="registry-server" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077487 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="registry-server" Jan 07 04:39:06 crc kubenswrapper[4980]: E0107 04:39:06.077506 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerName="copy" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077514 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerName="copy" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077742 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerName="gather" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077767 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="d381ae9b-32da-4f00-9129-6cd97551a9c6" containerName="registry-server" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.077782 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="9627588f-8915-4b95-b0df-ef9e7abb9009" containerName="copy" Jan 07 04:39:06 crc 
kubenswrapper[4980]: I0107 04:39:06.079509 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.091842 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njcjg"] Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.267962 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-949d9\" (UniqueName: \"kubernetes.io/projected/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-kube-api-access-949d9\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.268282 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-catalog-content\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.268328 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-utilities\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.369965 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-949d9\" (UniqueName: \"kubernetes.io/projected/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-kube-api-access-949d9\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc 
kubenswrapper[4980]: I0107 04:39:06.370122 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-catalog-content\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.370144 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-utilities\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.370589 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-catalog-content\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.370616 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-utilities\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.388590 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-949d9\" (UniqueName: \"kubernetes.io/projected/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-kube-api-access-949d9\") pod \"redhat-operators-njcjg\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.420985 4980 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:06 crc kubenswrapper[4980]: I0107 04:39:06.926114 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njcjg"] Jan 07 04:39:06 crc kubenswrapper[4980]: W0107 04:39:06.928295 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e3c2ae_f14a_431e_8ab4_9ca460728cf7.slice/crio-73dc732e370224c2ca9dc52af9dc32d88a8b8a0e0e6eb62cf72bd81467ad7436 WatchSource:0}: Error finding container 73dc732e370224c2ca9dc52af9dc32d88a8b8a0e0e6eb62cf72bd81467ad7436: Status 404 returned error can't find the container with id 73dc732e370224c2ca9dc52af9dc32d88a8b8a0e0e6eb62cf72bd81467ad7436 Jan 07 04:39:07 crc kubenswrapper[4980]: I0107 04:39:07.046467 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njcjg" event={"ID":"59e3c2ae-f14a-431e-8ab4-9ca460728cf7","Type":"ContainerStarted","Data":"73dc732e370224c2ca9dc52af9dc32d88a8b8a0e0e6eb62cf72bd81467ad7436"} Jan 07 04:39:08 crc kubenswrapper[4980]: I0107 04:39:08.060195 4980 generic.go:334] "Generic (PLEG): container finished" podID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerID="7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72" exitCode=0 Jan 07 04:39:08 crc kubenswrapper[4980]: I0107 04:39:08.060291 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njcjg" event={"ID":"59e3c2ae-f14a-431e-8ab4-9ca460728cf7","Type":"ContainerDied","Data":"7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72"} Jan 07 04:39:09 crc kubenswrapper[4980]: I0107 04:39:09.074935 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njcjg" 
event={"ID":"59e3c2ae-f14a-431e-8ab4-9ca460728cf7","Type":"ContainerStarted","Data":"38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942"} Jan 07 04:39:10 crc kubenswrapper[4980]: I0107 04:39:10.089957 4980 generic.go:334] "Generic (PLEG): container finished" podID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerID="38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942" exitCode=0 Jan 07 04:39:10 crc kubenswrapper[4980]: I0107 04:39:10.090024 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njcjg" event={"ID":"59e3c2ae-f14a-431e-8ab4-9ca460728cf7","Type":"ContainerDied","Data":"38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942"} Jan 07 04:39:11 crc kubenswrapper[4980]: I0107 04:39:11.107352 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njcjg" event={"ID":"59e3c2ae-f14a-431e-8ab4-9ca460728cf7","Type":"ContainerStarted","Data":"4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1"} Jan 07 04:39:11 crc kubenswrapper[4980]: I0107 04:39:11.136222 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-njcjg" podStartSLOduration=2.593908087 podStartE2EDuration="5.136202908s" podCreationTimestamp="2026-01-07 04:39:06 +0000 UTC" firstStartedPulling="2026-01-07 04:39:08.063864528 +0000 UTC m=+3994.629559263" lastFinishedPulling="2026-01-07 04:39:10.606159319 +0000 UTC m=+3997.171854084" observedRunningTime="2026-01-07 04:39:11.131455533 +0000 UTC m=+3997.697150278" watchObservedRunningTime="2026-01-07 04:39:11.136202908 +0000 UTC m=+3997.701897643" Jan 07 04:39:16 crc kubenswrapper[4980]: I0107 04:39:16.422029 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:16 crc kubenswrapper[4980]: I0107 04:39:16.422723 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:17 crc kubenswrapper[4980]: I0107 04:39:17.480091 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-njcjg" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="registry-server" probeResult="failure" output=< Jan 07 04:39:17 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 04:39:17 crc kubenswrapper[4980]: > Jan 07 04:39:26 crc kubenswrapper[4980]: I0107 04:39:26.508510 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:26 crc kubenswrapper[4980]: I0107 04:39:26.598314 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:26 crc kubenswrapper[4980]: I0107 04:39:26.761594 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njcjg"] Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.290861 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-njcjg" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="registry-server" containerID="cri-o://4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1" gracePeriod=2 Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.822730 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.883778 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-catalog-content\") pod \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.883966 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-utilities\") pod \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.884097 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-949d9\" (UniqueName: \"kubernetes.io/projected/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-kube-api-access-949d9\") pod \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\" (UID: \"59e3c2ae-f14a-431e-8ab4-9ca460728cf7\") " Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.886128 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-utilities" (OuterVolumeSpecName: "utilities") pod "59e3c2ae-f14a-431e-8ab4-9ca460728cf7" (UID: "59e3c2ae-f14a-431e-8ab4-9ca460728cf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.893352 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-kube-api-access-949d9" (OuterVolumeSpecName: "kube-api-access-949d9") pod "59e3c2ae-f14a-431e-8ab4-9ca460728cf7" (UID: "59e3c2ae-f14a-431e-8ab4-9ca460728cf7"). InnerVolumeSpecName "kube-api-access-949d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.986812 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:39:28 crc kubenswrapper[4980]: I0107 04:39:28.987135 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-949d9\" (UniqueName: \"kubernetes.io/projected/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-kube-api-access-949d9\") on node \"crc\" DevicePath \"\"" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.013873 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59e3c2ae-f14a-431e-8ab4-9ca460728cf7" (UID: "59e3c2ae-f14a-431e-8ab4-9ca460728cf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.089191 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3c2ae-f14a-431e-8ab4-9ca460728cf7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.307600 4980 generic.go:334] "Generic (PLEG): container finished" podID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerID="4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1" exitCode=0 Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.307667 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njcjg" event={"ID":"59e3c2ae-f14a-431e-8ab4-9ca460728cf7","Type":"ContainerDied","Data":"4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1"} Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.307711 4980 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-njcjg" event={"ID":"59e3c2ae-f14a-431e-8ab4-9ca460728cf7","Type":"ContainerDied","Data":"73dc732e370224c2ca9dc52af9dc32d88a8b8a0e0e6eb62cf72bd81467ad7436"} Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.307744 4980 scope.go:117] "RemoveContainer" containerID="4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.308004 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njcjg" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.371278 4980 scope.go:117] "RemoveContainer" containerID="38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.373308 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njcjg"] Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.381718 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-njcjg"] Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.404188 4980 scope.go:117] "RemoveContainer" containerID="7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.456153 4980 scope.go:117] "RemoveContainer" containerID="4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1" Jan 07 04:39:29 crc kubenswrapper[4980]: E0107 04:39:29.456845 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1\": container with ID starting with 4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1 not found: ID does not exist" containerID="4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.456906 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1"} err="failed to get container status \"4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1\": rpc error: code = NotFound desc = could not find container \"4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1\": container with ID starting with 4f368d5631fa6adada16c55a450646e9cd069ec80cfadb2643fc37a9a7969bd1 not found: ID does not exist" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.456932 4980 scope.go:117] "RemoveContainer" containerID="38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942" Jan 07 04:39:29 crc kubenswrapper[4980]: E0107 04:39:29.457479 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942\": container with ID starting with 38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942 not found: ID does not exist" containerID="38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.457547 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942"} err="failed to get container status \"38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942\": rpc error: code = NotFound desc = could not find container \"38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942\": container with ID starting with 38fd91ff4a51390c72a3a2b6ca1ee1bde3b36e745b4e2ef07396f8a4231d6942 not found: ID does not exist" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.457624 4980 scope.go:117] "RemoveContainer" containerID="7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72" Jan 07 04:39:29 crc kubenswrapper[4980]: E0107 
04:39:29.458074 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72\": container with ID starting with 7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72 not found: ID does not exist" containerID="7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.458107 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72"} err="failed to get container status \"7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72\": rpc error: code = NotFound desc = could not find container \"7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72\": container with ID starting with 7ae7d252079037ec4e494878ba4a5373c4b6884d290066c959a7121283f23c72 not found: ID does not exist" Jan 07 04:39:29 crc kubenswrapper[4980]: I0107 04:39:29.759372 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" path="/var/lib/kubelet/pods/59e3c2ae-f14a-431e-8ab4-9ca460728cf7/volumes" Jan 07 04:39:42 crc kubenswrapper[4980]: I0107 04:39:42.963805 4980 scope.go:117] "RemoveContainer" containerID="f4fa38a92024239ce715f705f3d3a5a490f2ed7a8246469322dd4b55f2078ad4" Jan 07 04:40:06 crc kubenswrapper[4980]: I0107 04:40:06.543867 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:40:06 crc kubenswrapper[4980]: I0107 04:40:06.544642 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" 
podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:40:36 crc kubenswrapper[4980]: I0107 04:40:36.543539 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:40:36 crc kubenswrapper[4980]: I0107 04:40:36.544066 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:40:43 crc kubenswrapper[4980]: I0107 04:40:43.065970 4980 scope.go:117] "RemoveContainer" containerID="ba26f10bb2cad80ec1fa0b8cd4f18da245286d9c7264600b1c4737a0c26e09c7" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.294810 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwgmc/must-gather-qvxjz"] Jan 07 04:40:50 crc kubenswrapper[4980]: E0107 04:40:50.295794 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="registry-server" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.295808 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="registry-server" Jan 07 04:40:50 crc kubenswrapper[4980]: E0107 04:40:50.295834 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="extract-utilities" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.295840 4980 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="extract-utilities" Jan 07 04:40:50 crc kubenswrapper[4980]: E0107 04:40:50.295859 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="extract-content" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.295865 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="extract-content" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.296051 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e3c2ae-f14a-431e-8ab4-9ca460728cf7" containerName="registry-server" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.296908 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.301021 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wwgmc"/"kube-root-ca.crt" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.301257 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wwgmc"/"openshift-service-ca.crt" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.341595 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wwgmc/must-gather-qvxjz"] Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.452252 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/859120b5-d837-4e48-ac1a-c8090e7793e0-must-gather-output\") pod \"must-gather-qvxjz\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.452380 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-72swr\" (UniqueName: \"kubernetes.io/projected/859120b5-d837-4e48-ac1a-c8090e7793e0-kube-api-access-72swr\") pod \"must-gather-qvxjz\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.554200 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/859120b5-d837-4e48-ac1a-c8090e7793e0-must-gather-output\") pod \"must-gather-qvxjz\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.554348 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72swr\" (UniqueName: \"kubernetes.io/projected/859120b5-d837-4e48-ac1a-c8090e7793e0-kube-api-access-72swr\") pod \"must-gather-qvxjz\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.555041 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/859120b5-d837-4e48-ac1a-c8090e7793e0-must-gather-output\") pod \"must-gather-qvxjz\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.576372 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72swr\" (UniqueName: \"kubernetes.io/projected/859120b5-d837-4e48-ac1a-c8090e7793e0-kube-api-access-72swr\") pod \"must-gather-qvxjz\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:50 crc kubenswrapper[4980]: I0107 04:40:50.618170 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:40:51 crc kubenswrapper[4980]: I0107 04:40:51.118768 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wwgmc/must-gather-qvxjz"] Jan 07 04:40:51 crc kubenswrapper[4980]: I0107 04:40:51.279118 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" event={"ID":"859120b5-d837-4e48-ac1a-c8090e7793e0","Type":"ContainerStarted","Data":"b5a81b76bed63f34e6c4526c8e70eeb3a23de7a31ae46ae578aab0c2d49cf1a0"} Jan 07 04:40:52 crc kubenswrapper[4980]: I0107 04:40:52.308078 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" event={"ID":"859120b5-d837-4e48-ac1a-c8090e7793e0","Type":"ContainerStarted","Data":"9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33"} Jan 07 04:40:52 crc kubenswrapper[4980]: I0107 04:40:52.308386 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" event={"ID":"859120b5-d837-4e48-ac1a-c8090e7793e0","Type":"ContainerStarted","Data":"242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4"} Jan 07 04:40:52 crc kubenswrapper[4980]: I0107 04:40:52.349375 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" podStartSLOduration=2.349347697 podStartE2EDuration="2.349347697s" podCreationTimestamp="2026-01-07 04:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 04:40:52.329643221 +0000 UTC m=+4098.895337986" watchObservedRunningTime="2026-01-07 04:40:52.349347697 +0000 UTC m=+4098.915042472" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.009438 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-dvk6n"] Jan 07 04:40:55 crc kubenswrapper[4980]: 
I0107 04:40:55.011904 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.015942 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wwgmc"/"default-dockercfg-s6twl" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.148060 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lpt\" (UniqueName: \"kubernetes.io/projected/1e97c4f2-8114-48c3-a20c-075ea7880c28-kube-api-access-k9lpt\") pod \"crc-debug-dvk6n\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.148173 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e97c4f2-8114-48c3-a20c-075ea7880c28-host\") pod \"crc-debug-dvk6n\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.250414 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lpt\" (UniqueName: \"kubernetes.io/projected/1e97c4f2-8114-48c3-a20c-075ea7880c28-kube-api-access-k9lpt\") pod \"crc-debug-dvk6n\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.250535 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e97c4f2-8114-48c3-a20c-075ea7880c28-host\") pod \"crc-debug-dvk6n\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.250766 4980 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e97c4f2-8114-48c3-a20c-075ea7880c28-host\") pod \"crc-debug-dvk6n\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.271766 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lpt\" (UniqueName: \"kubernetes.io/projected/1e97c4f2-8114-48c3-a20c-075ea7880c28-kube-api-access-k9lpt\") pod \"crc-debug-dvk6n\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: I0107 04:40:55.346774 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:40:55 crc kubenswrapper[4980]: W0107 04:40:55.391197 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e97c4f2_8114_48c3_a20c_075ea7880c28.slice/crio-de70145a0fe9ee9e008707bea98b016db58ba158ffaaa6ef41133aa088108d33 WatchSource:0}: Error finding container de70145a0fe9ee9e008707bea98b016db58ba158ffaaa6ef41133aa088108d33: Status 404 returned error can't find the container with id de70145a0fe9ee9e008707bea98b016db58ba158ffaaa6ef41133aa088108d33 Jan 07 04:40:56 crc kubenswrapper[4980]: I0107 04:40:56.348299 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" event={"ID":"1e97c4f2-8114-48c3-a20c-075ea7880c28","Type":"ContainerStarted","Data":"2b370deb3921edc7458d87dad0e8fcaa79e880a1ab03095d1eba6a3ee12530b8"} Jan 07 04:40:56 crc kubenswrapper[4980]: I0107 04:40:56.349286 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" event={"ID":"1e97c4f2-8114-48c3-a20c-075ea7880c28","Type":"ContainerStarted","Data":"de70145a0fe9ee9e008707bea98b016db58ba158ffaaa6ef41133aa088108d33"} Jan 
07 04:40:56 crc kubenswrapper[4980]: I0107 04:40:56.375671 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" podStartSLOduration=2.375645907 podStartE2EDuration="2.375645907s" podCreationTimestamp="2026-01-07 04:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 04:40:56.367146056 +0000 UTC m=+4102.932840791" watchObservedRunningTime="2026-01-07 04:40:56.375645907 +0000 UTC m=+4102.941340662" Jan 07 04:41:06 crc kubenswrapper[4980]: I0107 04:41:06.542941 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 04:41:06 crc kubenswrapper[4980]: I0107 04:41:06.543442 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 04:41:06 crc kubenswrapper[4980]: I0107 04:41:06.543488 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" Jan 07 04:41:06 crc kubenswrapper[4980]: I0107 04:41:06.544165 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"734a74dc3c192c49aa206faa107f7ca49bb64ac582050f82bf6fdf1e63cf5035"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 04:41:06 crc kubenswrapper[4980]: 
I0107 04:41:06.544218 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://734a74dc3c192c49aa206faa107f7ca49bb64ac582050f82bf6fdf1e63cf5035" gracePeriod=600 Jan 07 04:41:07 crc kubenswrapper[4980]: I0107 04:41:07.457728 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="734a74dc3c192c49aa206faa107f7ca49bb64ac582050f82bf6fdf1e63cf5035" exitCode=0 Jan 07 04:41:07 crc kubenswrapper[4980]: I0107 04:41:07.457812 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"734a74dc3c192c49aa206faa107f7ca49bb64ac582050f82bf6fdf1e63cf5035"} Jan 07 04:41:07 crc kubenswrapper[4980]: I0107 04:41:07.458728 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766"} Jan 07 04:41:07 crc kubenswrapper[4980]: I0107 04:41:07.458753 4980 scope.go:117] "RemoveContainer" containerID="8b9e21254d775efd138637fedca49f8de7db69dba119bb76836e227190c72cc3" Jan 07 04:41:26 crc kubenswrapper[4980]: I0107 04:41:26.636515 4980 generic.go:334] "Generic (PLEG): container finished" podID="1e97c4f2-8114-48c3-a20c-075ea7880c28" containerID="2b370deb3921edc7458d87dad0e8fcaa79e880a1ab03095d1eba6a3ee12530b8" exitCode=0 Jan 07 04:41:26 crc kubenswrapper[4980]: I0107 04:41:26.636779 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" 
event={"ID":"1e97c4f2-8114-48c3-a20c-075ea7880c28","Type":"ContainerDied","Data":"2b370deb3921edc7458d87dad0e8fcaa79e880a1ab03095d1eba6a3ee12530b8"} Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.746445 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.794028 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-dvk6n"] Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.805301 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-dvk6n"] Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.888432 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e97c4f2-8114-48c3-a20c-075ea7880c28-host\") pod \"1e97c4f2-8114-48c3-a20c-075ea7880c28\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.888503 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lpt\" (UniqueName: \"kubernetes.io/projected/1e97c4f2-8114-48c3-a20c-075ea7880c28-kube-api-access-k9lpt\") pod \"1e97c4f2-8114-48c3-a20c-075ea7880c28\" (UID: \"1e97c4f2-8114-48c3-a20c-075ea7880c28\") " Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.888549 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e97c4f2-8114-48c3-a20c-075ea7880c28-host" (OuterVolumeSpecName: "host") pod "1e97c4f2-8114-48c3-a20c-075ea7880c28" (UID: "1e97c4f2-8114-48c3-a20c-075ea7880c28"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.888965 4980 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e97c4f2-8114-48c3-a20c-075ea7880c28-host\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.898674 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e97c4f2-8114-48c3-a20c-075ea7880c28-kube-api-access-k9lpt" (OuterVolumeSpecName: "kube-api-access-k9lpt") pod "1e97c4f2-8114-48c3-a20c-075ea7880c28" (UID: "1e97c4f2-8114-48c3-a20c-075ea7880c28"). InnerVolumeSpecName "kube-api-access-k9lpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:41:27 crc kubenswrapper[4980]: I0107 04:41:27.991640 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lpt\" (UniqueName: \"kubernetes.io/projected/1e97c4f2-8114-48c3-a20c-075ea7880c28-kube-api-access-k9lpt\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:28 crc kubenswrapper[4980]: I0107 04:41:28.658878 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de70145a0fe9ee9e008707bea98b016db58ba158ffaaa6ef41133aa088108d33" Jan 07 04:41:28 crc kubenswrapper[4980]: I0107 04:41:28.658944 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-dvk6n" Jan 07 04:41:28 crc kubenswrapper[4980]: I0107 04:41:28.976487 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-72nsl"] Jan 07 04:41:28 crc kubenswrapper[4980]: E0107 04:41:28.976921 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e97c4f2-8114-48c3-a20c-075ea7880c28" containerName="container-00" Jan 07 04:41:28 crc kubenswrapper[4980]: I0107 04:41:28.976936 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e97c4f2-8114-48c3-a20c-075ea7880c28" containerName="container-00" Jan 07 04:41:28 crc kubenswrapper[4980]: I0107 04:41:28.977204 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e97c4f2-8114-48c3-a20c-075ea7880c28" containerName="container-00" Jan 07 04:41:28 crc kubenswrapper[4980]: I0107 04:41:28.977916 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:28 crc kubenswrapper[4980]: I0107 04:41:28.983592 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wwgmc"/"default-dockercfg-s6twl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.116168 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxzm\" (UniqueName: \"kubernetes.io/projected/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-kube-api-access-6vxzm\") pod \"crc-debug-72nsl\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.116507 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-host\") pod \"crc-debug-72nsl\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " 
pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.219143 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxzm\" (UniqueName: \"kubernetes.io/projected/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-kube-api-access-6vxzm\") pod \"crc-debug-72nsl\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.219248 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-host\") pod \"crc-debug-72nsl\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.219369 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-host\") pod \"crc-debug-72nsl\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.240405 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxzm\" (UniqueName: \"kubernetes.io/projected/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-kube-api-access-6vxzm\") pod \"crc-debug-72nsl\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.296564 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.669194 4980 generic.go:334] "Generic (PLEG): container finished" podID="bc0ba207-b8b8-4eca-b229-e7d37dcf2f63" containerID="af0dd8d556fc7b2c2d0c3e3a8b109eae59613997f6497620f2c5f4385003e7ae" exitCode=0 Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.669449 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/crc-debug-72nsl" event={"ID":"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63","Type":"ContainerDied","Data":"af0dd8d556fc7b2c2d0c3e3a8b109eae59613997f6497620f2c5f4385003e7ae"} Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.669473 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/crc-debug-72nsl" event={"ID":"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63","Type":"ContainerStarted","Data":"6783176d5ea415e5976e60ca2b634540d1d8b252df961a064f31e33b23df0337"} Jan 07 04:41:29 crc kubenswrapper[4980]: I0107 04:41:29.744406 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e97c4f2-8114-48c3-a20c-075ea7880c28" path="/var/lib/kubelet/pods/1e97c4f2-8114-48c3-a20c-075ea7880c28/volumes" Jan 07 04:41:30 crc kubenswrapper[4980]: I0107 04:41:30.044441 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-72nsl"] Jan 07 04:41:30 crc kubenswrapper[4980]: I0107 04:41:30.051638 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-72nsl"] Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.104659 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.254110 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-host\") pod \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.254246 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-host" (OuterVolumeSpecName: "host") pod "bc0ba207-b8b8-4eca-b229-e7d37dcf2f63" (UID: "bc0ba207-b8b8-4eca-b229-e7d37dcf2f63"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.254524 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vxzm\" (UniqueName: \"kubernetes.io/projected/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-kube-api-access-6vxzm\") pod \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\" (UID: \"bc0ba207-b8b8-4eca-b229-e7d37dcf2f63\") " Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.255047 4980 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-host\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.265778 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-kube-api-access-6vxzm" (OuterVolumeSpecName: "kube-api-access-6vxzm") pod "bc0ba207-b8b8-4eca-b229-e7d37dcf2f63" (UID: "bc0ba207-b8b8-4eca-b229-e7d37dcf2f63"). InnerVolumeSpecName "kube-api-access-6vxzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.290652 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-kxnnw"] Jan 07 04:41:31 crc kubenswrapper[4980]: E0107 04:41:31.291102 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0ba207-b8b8-4eca-b229-e7d37dcf2f63" containerName="container-00" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.291119 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0ba207-b8b8-4eca-b229-e7d37dcf2f63" containerName="container-00" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.291318 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0ba207-b8b8-4eca-b229-e7d37dcf2f63" containerName="container-00" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.291972 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.356803 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glcmr\" (UniqueName: \"kubernetes.io/projected/ef4dee4f-d396-458e-b16d-ebd607413e28-kube-api-access-glcmr\") pod \"crc-debug-kxnnw\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.356893 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef4dee4f-d396-458e-b16d-ebd607413e28-host\") pod \"crc-debug-kxnnw\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.356988 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vxzm\" (UniqueName: 
\"kubernetes.io/projected/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63-kube-api-access-6vxzm\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.458443 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glcmr\" (UniqueName: \"kubernetes.io/projected/ef4dee4f-d396-458e-b16d-ebd607413e28-kube-api-access-glcmr\") pod \"crc-debug-kxnnw\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.458516 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef4dee4f-d396-458e-b16d-ebd607413e28-host\") pod \"crc-debug-kxnnw\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.458664 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef4dee4f-d396-458e-b16d-ebd607413e28-host\") pod \"crc-debug-kxnnw\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.474740 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glcmr\" (UniqueName: \"kubernetes.io/projected/ef4dee4f-d396-458e-b16d-ebd607413e28-kube-api-access-glcmr\") pod \"crc-debug-kxnnw\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.621014 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.690303 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6783176d5ea415e5976e60ca2b634540d1d8b252df961a064f31e33b23df0337" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.690381 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-72nsl" Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.696742 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" event={"ID":"ef4dee4f-d396-458e-b16d-ebd607413e28","Type":"ContainerStarted","Data":"6b8c9edf6251fc9f2ea22075ce2e5f71fc3bb4b861d3b64a9f5d2c91020b2108"} Jan 07 04:41:31 crc kubenswrapper[4980]: I0107 04:41:31.745321 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc0ba207-b8b8-4eca-b229-e7d37dcf2f63" path="/var/lib/kubelet/pods/bc0ba207-b8b8-4eca-b229-e7d37dcf2f63/volumes" Jan 07 04:41:32 crc kubenswrapper[4980]: I0107 04:41:32.713682 4980 generic.go:334] "Generic (PLEG): container finished" podID="ef4dee4f-d396-458e-b16d-ebd607413e28" containerID="5df369ce6a1aa16844fd0356e32745bee39873b1611bea5f32d231b856cd84e2" exitCode=0 Jan 07 04:41:32 crc kubenswrapper[4980]: I0107 04:41:32.713757 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" event={"ID":"ef4dee4f-d396-458e-b16d-ebd607413e28","Type":"ContainerDied","Data":"5df369ce6a1aa16844fd0356e32745bee39873b1611bea5f32d231b856cd84e2"} Jan 07 04:41:32 crc kubenswrapper[4980]: I0107 04:41:32.770090 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-kxnnw"] Jan 07 04:41:32 crc kubenswrapper[4980]: I0107 04:41:32.783856 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwgmc/crc-debug-kxnnw"] Jan 07 04:41:33 crc 
kubenswrapper[4980]: I0107 04:41:33.945603 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.116701 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef4dee4f-d396-458e-b16d-ebd607413e28-host\") pod \"ef4dee4f-d396-458e-b16d-ebd607413e28\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.116819 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4dee4f-d396-458e-b16d-ebd607413e28-host" (OuterVolumeSpecName: "host") pod "ef4dee4f-d396-458e-b16d-ebd607413e28" (UID: "ef4dee4f-d396-458e-b16d-ebd607413e28"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.116915 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glcmr\" (UniqueName: \"kubernetes.io/projected/ef4dee4f-d396-458e-b16d-ebd607413e28-kube-api-access-glcmr\") pod \"ef4dee4f-d396-458e-b16d-ebd607413e28\" (UID: \"ef4dee4f-d396-458e-b16d-ebd607413e28\") " Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.117445 4980 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef4dee4f-d396-458e-b16d-ebd607413e28-host\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.121959 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4dee4f-d396-458e-b16d-ebd607413e28-kube-api-access-glcmr" (OuterVolumeSpecName: "kube-api-access-glcmr") pod "ef4dee4f-d396-458e-b16d-ebd607413e28" (UID: "ef4dee4f-d396-458e-b16d-ebd607413e28"). InnerVolumeSpecName "kube-api-access-glcmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.219421 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glcmr\" (UniqueName: \"kubernetes.io/projected/ef4dee4f-d396-458e-b16d-ebd607413e28-kube-api-access-glcmr\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.842108 4980 scope.go:117] "RemoveContainer" containerID="5df369ce6a1aa16844fd0356e32745bee39873b1611bea5f32d231b856cd84e2" Jan 07 04:41:34 crc kubenswrapper[4980]: I0107 04:41:34.842186 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/crc-debug-kxnnw" Jan 07 04:41:35 crc kubenswrapper[4980]: I0107 04:41:35.755522 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4dee4f-d396-458e-b16d-ebd607413e28" path="/var/lib/kubelet/pods/ef4dee4f-d396-458e-b16d-ebd607413e28/volumes" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.624864 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fnj8"] Jan 07 04:41:44 crc kubenswrapper[4980]: E0107 04:41:44.625922 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4dee4f-d396-458e-b16d-ebd607413e28" containerName="container-00" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.625938 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4dee4f-d396-458e-b16d-ebd607413e28" containerName="container-00" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.626184 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4dee4f-d396-458e-b16d-ebd607413e28" containerName="container-00" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.628095 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.680687 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fnj8"] Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.735345 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-utilities\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.735662 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-catalog-content\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.735739 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wk4\" (UniqueName: \"kubernetes.io/projected/8a8acc9c-cac7-458c-b704-df411be7f7a3-kube-api-access-n9wk4\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.836997 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-catalog-content\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.837099 4980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9wk4\" (UniqueName: \"kubernetes.io/projected/8a8acc9c-cac7-458c-b704-df411be7f7a3-kube-api-access-n9wk4\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.837190 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-utilities\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.838653 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-catalog-content\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:44 crc kubenswrapper[4980]: I0107 04:41:44.840817 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-utilities\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:45 crc kubenswrapper[4980]: I0107 04:41:45.170256 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wk4\" (UniqueName: \"kubernetes.io/projected/8a8acc9c-cac7-458c-b704-df411be7f7a3-kube-api-access-n9wk4\") pod \"community-operators-4fnj8\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:45 crc kubenswrapper[4980]: I0107 04:41:45.293112 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:45 crc kubenswrapper[4980]: I0107 04:41:45.715436 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fnj8"] Jan 07 04:41:45 crc kubenswrapper[4980]: W0107 04:41:45.733391 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a8acc9c_cac7_458c_b704_df411be7f7a3.slice/crio-c66013fae64218adeeccb29ec1236e839e6a550214f55ab34366af63597da01b WatchSource:0}: Error finding container c66013fae64218adeeccb29ec1236e839e6a550214f55ab34366af63597da01b: Status 404 returned error can't find the container with id c66013fae64218adeeccb29ec1236e839e6a550214f55ab34366af63597da01b Jan 07 04:41:45 crc kubenswrapper[4980]: I0107 04:41:45.960947 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fnj8" event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerStarted","Data":"886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6"} Jan 07 04:41:45 crc kubenswrapper[4980]: I0107 04:41:45.961208 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fnj8" event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerStarted","Data":"c66013fae64218adeeccb29ec1236e839e6a550214f55ab34366af63597da01b"} Jan 07 04:41:46 crc kubenswrapper[4980]: I0107 04:41:46.971801 4980 generic.go:334] "Generic (PLEG): container finished" podID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerID="886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6" exitCode=0 Jan 07 04:41:46 crc kubenswrapper[4980]: I0107 04:41:46.971852 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fnj8" 
event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerDied","Data":"886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6"} Jan 07 04:41:47 crc kubenswrapper[4980]: I0107 04:41:47.983205 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fnj8" event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerStarted","Data":"1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda"} Jan 07 04:41:48 crc kubenswrapper[4980]: I0107 04:41:48.992402 4980 generic.go:334] "Generic (PLEG): container finished" podID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerID="1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda" exitCode=0 Jan 07 04:41:48 crc kubenswrapper[4980]: I0107 04:41:48.992663 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fnj8" event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerDied","Data":"1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda"} Jan 07 04:41:50 crc kubenswrapper[4980]: I0107 04:41:50.003113 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fnj8" event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerStarted","Data":"079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef"} Jan 07 04:41:50 crc kubenswrapper[4980]: I0107 04:41:50.051430 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fnj8" podStartSLOduration=3.522093596 podStartE2EDuration="6.05140153s" podCreationTimestamp="2026-01-07 04:41:44 +0000 UTC" firstStartedPulling="2026-01-07 04:41:46.973649508 +0000 UTC m=+4153.539344243" lastFinishedPulling="2026-01-07 04:41:49.502957432 +0000 UTC m=+4156.068652177" observedRunningTime="2026-01-07 04:41:50.035917253 +0000 UTC m=+4156.601612028" watchObservedRunningTime="2026-01-07 04:41:50.05140153 +0000 UTC 
m=+4156.617096305" Jan 07 04:41:55 crc kubenswrapper[4980]: I0107 04:41:55.293343 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:55 crc kubenswrapper[4980]: I0107 04:41:55.294166 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:55 crc kubenswrapper[4980]: I0107 04:41:55.366502 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:56 crc kubenswrapper[4980]: I0107 04:41:56.115465 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:56 crc kubenswrapper[4980]: I0107 04:41:56.158146 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fnj8"] Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.083969 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fnj8" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="registry-server" containerID="cri-o://079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef" gracePeriod=2 Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.570202 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.618603 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9wk4\" (UniqueName: \"kubernetes.io/projected/8a8acc9c-cac7-458c-b704-df411be7f7a3-kube-api-access-n9wk4\") pod \"8a8acc9c-cac7-458c-b704-df411be7f7a3\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.618675 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-utilities\") pod \"8a8acc9c-cac7-458c-b704-df411be7f7a3\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.618748 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-catalog-content\") pod \"8a8acc9c-cac7-458c-b704-df411be7f7a3\" (UID: \"8a8acc9c-cac7-458c-b704-df411be7f7a3\") " Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.619882 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-utilities" (OuterVolumeSpecName: "utilities") pod "8a8acc9c-cac7-458c-b704-df411be7f7a3" (UID: "8a8acc9c-cac7-458c-b704-df411be7f7a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.633763 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8acc9c-cac7-458c-b704-df411be7f7a3-kube-api-access-n9wk4" (OuterVolumeSpecName: "kube-api-access-n9wk4") pod "8a8acc9c-cac7-458c-b704-df411be7f7a3" (UID: "8a8acc9c-cac7-458c-b704-df411be7f7a3"). InnerVolumeSpecName "kube-api-access-n9wk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.668574 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a8acc9c-cac7-458c-b704-df411be7f7a3" (UID: "8a8acc9c-cac7-458c-b704-df411be7f7a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.721219 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.721250 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9wk4\" (UniqueName: \"kubernetes.io/projected/8a8acc9c-cac7-458c-b704-df411be7f7a3-kube-api-access-n9wk4\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:58 crc kubenswrapper[4980]: I0107 04:41:58.721271 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acc9c-cac7-458c-b704-df411be7f7a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.092861 4980 generic.go:334] "Generic (PLEG): container finished" podID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerID="079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef" exitCode=0 Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.092910 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fnj8" event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerDied","Data":"079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef"} Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.092942 4980 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-4fnj8" event={"ID":"8a8acc9c-cac7-458c-b704-df411be7f7a3","Type":"ContainerDied","Data":"c66013fae64218adeeccb29ec1236e839e6a550214f55ab34366af63597da01b"} Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.092962 4980 scope.go:117] "RemoveContainer" containerID="079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.093106 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fnj8" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.123442 4980 scope.go:117] "RemoveContainer" containerID="1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.123628 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fnj8"] Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.130788 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fnj8"] Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.143856 4980 scope.go:117] "RemoveContainer" containerID="886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.191481 4980 scope.go:117] "RemoveContainer" containerID="079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef" Jan 07 04:41:59 crc kubenswrapper[4980]: E0107 04:41:59.192085 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef\": container with ID starting with 079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef not found: ID does not exist" containerID="079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 
04:41:59.192123 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef"} err="failed to get container status \"079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef\": rpc error: code = NotFound desc = could not find container \"079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef\": container with ID starting with 079448ee65c12ecb353921c9bf3cb777ed6b57872436c5f7933fadf0e544ffef not found: ID does not exist" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.192148 4980 scope.go:117] "RemoveContainer" containerID="1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda" Jan 07 04:41:59 crc kubenswrapper[4980]: E0107 04:41:59.192533 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda\": container with ID starting with 1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda not found: ID does not exist" containerID="1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.192652 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda"} err="failed to get container status \"1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda\": rpc error: code = NotFound desc = could not find container \"1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda\": container with ID starting with 1d3b87092fca4bec91d62f1b661d2831897b0e78fe9667ffe4ad63934b810cda not found: ID does not exist" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.192685 4980 scope.go:117] "RemoveContainer" containerID="886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6" Jan 07 04:41:59 crc 
kubenswrapper[4980]: E0107 04:41:59.193011 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6\": container with ID starting with 886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6 not found: ID does not exist" containerID="886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.193048 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6"} err="failed to get container status \"886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6\": rpc error: code = NotFound desc = could not find container \"886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6\": container with ID starting with 886dc73d26c2eb0fa60969adf56a1033d90bf3c20a08b2a2888ba5c524e141f6 not found: ID does not exist" Jan 07 04:41:59 crc kubenswrapper[4980]: I0107 04:41:59.745991 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" path="/var/lib/kubelet/pods/8a8acc9c-cac7-458c-b704-df411be7f7a3/volumes" Jan 07 04:42:08 crc kubenswrapper[4980]: I0107 04:42:08.552464 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67c95874-pm99w_bc960852-4c05-4805-8251-8336bb022087/barbican-api/0.log" Jan 07 04:42:08 crc kubenswrapper[4980]: I0107 04:42:08.609303 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67c95874-pm99w_bc960852-4c05-4805-8251-8336bb022087/barbican-api-log/0.log" Jan 07 04:42:08 crc kubenswrapper[4980]: I0107 04:42:08.749305 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78f9866cd4-xmbrp_eb08ed0c-20e9-44f7-9472-9d1899a51d32/barbican-keystone-listener/0.log" Jan 07 
04:42:08 crc kubenswrapper[4980]: I0107 04:42:08.824788 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78f9866cd4-xmbrp_eb08ed0c-20e9-44f7-9472-9d1899a51d32/barbican-keystone-listener-log/0.log" Jan 07 04:42:08 crc kubenswrapper[4980]: I0107 04:42:08.859822 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-558f69c5-b5wnd_2d2661ce-3148-48ac-a1b2-af154d207c5a/barbican-worker/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.002679 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-558f69c5-b5wnd_2d2661ce-3148-48ac-a1b2-af154d207c5a/barbican-worker-log/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.014877 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j8mdf_b4dce042-2c6f-4a74-bbb3-84a79cfb02a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.188438 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/ceilometer-notification-agent/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.216768 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/ceilometer-central-agent/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.246987 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/proxy-httpd/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.296744 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7aa05cd9-6f30-4fbe-a2b2-fb527752dcf8/sg-core/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.414165 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_a441a7ef-1973-4f21-8ec1-834904f5bcf7/cinder-api-log/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.432769 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a441a7ef-1973-4f21-8ec1-834904f5bcf7/cinder-api/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.612848 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e880ebfc-1037-4101-b489-84fc6660d45f/probe/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.688355 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e880ebfc-1037-4101-b489-84fc6660d45f/cinder-scheduler/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.722236 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7pljr_67224118-a228-4d50-a70e-1d675bd7df2e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.879491 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xmxdg_553631f5-8b26-4a24-bc27-cdbf1ad869db/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:09 crc kubenswrapper[4980]: I0107 04:42:09.922755 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n8zmd_2ac37f68-f3d9-42eb-a68c-d2526b730663/init/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.080934 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n8zmd_2ac37f68-f3d9-42eb-a68c-d2526b730663/init/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.130813 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n8zmd_2ac37f68-f3d9-42eb-a68c-d2526b730663/dnsmasq-dns/0.log" Jan 07 04:42:10 crc 
kubenswrapper[4980]: I0107 04:42:10.139328 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mrqvp_111ee99f-4f5d-4647-9ee9-33addfaad13e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.350089 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f122c82e-51a4-4b1c-8457-02b12f045c52/glance-httpd/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.350193 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f122c82e-51a4-4b1c-8457-02b12f045c52/glance-log/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.508702 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0/glance-httpd/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.583515 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8ef8bb9f-4c39-47a0-b6d8-6a20655d42a0/glance-log/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.676538 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-565d4f6c4b-gj6mz_5d0304bc-69af-4a65-90e0-088a428990a1/horizon/0.log" Jan 07 04:42:10 crc kubenswrapper[4980]: I0107 04:42:10.915703 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-rks2b_010cdc43-6f59-4a62-b7ae-b98c5cdec4e4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:11 crc kubenswrapper[4980]: I0107 04:42:11.064879 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-565d4f6c4b-gj6mz_5d0304bc-69af-4a65-90e0-088a428990a1/horizon-log/0.log" Jan 07 04:42:11 crc kubenswrapper[4980]: I0107 04:42:11.158762 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-r96jt_accf2eeb-147d-49a3-8aa3-06d9e52a2fb4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:11 crc kubenswrapper[4980]: I0107 04:42:11.309274 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-85979fc5c6-rh7l6_ede07643-4b02-490a-a73d-e6c783a138e6/keystone-api/0.log" Jan 07 04:42:11 crc kubenswrapper[4980]: I0107 04:42:11.383841 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29462641-qspbr_f0d0d398-f0fc-4cec-abb7-7c5eca5254cd/keystone-cron/0.log" Jan 07 04:42:11 crc kubenswrapper[4980]: I0107 04:42:11.478045 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_96c1a5f6-5439-4aa4-a1c0-27408fbbe977/kube-state-metrics/0.log" Jan 07 04:42:11 crc kubenswrapper[4980]: I0107 04:42:11.640342 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bngfg_952aa7ac-68e0-4f49-bd80-407e2181fa05/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:11 crc kubenswrapper[4980]: I0107 04:42:11.983092 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5ffd847cb9-kf6tb_6369f2cd-b133-42d0-bac5-f4790bf08ae5/neutron-httpd/0.log" Jan 07 04:42:12 crc kubenswrapper[4980]: I0107 04:42:12.039399 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5ffd847cb9-kf6tb_6369f2cd-b133-42d0-bac5-f4790bf08ae5/neutron-api/0.log" Jan 07 04:42:12 crc kubenswrapper[4980]: I0107 04:42:12.098615 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-66jxl_f588fdb7-1285-44cd-bf64-9b1681863e15/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:12 crc kubenswrapper[4980]: I0107 04:42:12.587734 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_111ec39e-2b02-4d0d-89cf-9484a6399fd7/nova-api-log/0.log" Jan 07 04:42:12 crc kubenswrapper[4980]: I0107 04:42:12.701066 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_86f2272b-45b2-490c-a64e-f4367491036b/nova-cell0-conductor-conductor/0.log" Jan 07 04:42:12 crc kubenswrapper[4980]: I0107 04:42:12.978634 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c840061e-cf97-4c53-b581-805806d7343c/nova-cell1-conductor-conductor/0.log" Jan 07 04:42:13 crc kubenswrapper[4980]: I0107 04:42:13.116527 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_111ec39e-2b02-4d0d-89cf-9484a6399fd7/nova-api-api/0.log" Jan 07 04:42:13 crc kubenswrapper[4980]: I0107 04:42:13.141976 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0fca998b-28f9-4611-99f7-2cb9f2cb8042/nova-cell1-novncproxy-novncproxy/0.log" Jan 07 04:42:13 crc kubenswrapper[4980]: I0107 04:42:13.226517 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-w2gvj_b5b0b45b-e9f1-40fe-99ef-4ef52da4eb7e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:13 crc kubenswrapper[4980]: I0107 04:42:13.359948 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c7266195-f7f5-40e2-9c60-97a0d6684272/nova-metadata-log/0.log" Jan 07 04:42:13 crc kubenswrapper[4980]: I0107 04:42:13.879737 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3336b2a3-f175-44d1-9771-adabe71eea6c/mysql-bootstrap/0.log" Jan 07 04:42:13 crc kubenswrapper[4980]: I0107 04:42:13.949676 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_62709b59-f907-4b1f-b0a4-bab71ce12d86/nova-scheduler-scheduler/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 
04:42:14.029217 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3336b2a3-f175-44d1-9771-adabe71eea6c/mysql-bootstrap/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.114062 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3336b2a3-f175-44d1-9771-adabe71eea6c/galera/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.210353 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2/mysql-bootstrap/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.405035 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2/mysql-bootstrap/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.454185 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4699a3c-f9f1-4c80-93fb-dcb3b9e852b2/galera/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.570087 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ec7c2df8-5955-4063-831a-7d1371e5e983/openstackclient/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.662705 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-94rwj_4567269f-c5aa-44a8-8e68-c0dc01c2b55c/ovn-controller/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.734608 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c7266195-f7f5-40e2-9c60-97a0d6684272/nova-metadata-metadata/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.888816 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hjmmm_b8f6a4d2-652b-4f9a-ad2e-b974c9062112/openstack-network-exporter/0.log" Jan 07 04:42:14 crc kubenswrapper[4980]: I0107 04:42:14.947886 4980 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovsdb-server-init/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.191215 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovsdb-server/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.217120 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovsdb-server-init/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.271521 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nfg5_2b80c3b0-701f-4616-b851-c954a9421bf6/ovs-vswitchd/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.425716 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-w9lbc_b6c5efe0-317c-4de6-9d52-c8790db72ae6/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.489864 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5687f55-2760-4b17-949f-7a691768ba40/openstack-network-exporter/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.541276 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5687f55-2760-4b17-949f-7a691768ba40/ovn-northd/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.637001 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc84f69f-9bab-40e5-80a8-75266ef8f4b7/openstack-network-exporter/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 04:42:15.722995 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc84f69f-9bab-40e5-80a8-75266ef8f4b7/ovsdbserver-nb/0.log" Jan 07 04:42:15 crc kubenswrapper[4980]: I0107 
04:42:15.833048 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_808ebed8-cef0-4938-9ad2-64f28d9c8af2/openstack-network-exporter/0.log"
Jan 07 04:42:16 crc kubenswrapper[4980]: I0107 04:42:16.500681 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_808ebed8-cef0-4938-9ad2-64f28d9c8af2/ovsdbserver-sb/0.log"
Jan 07 04:42:16 crc kubenswrapper[4980]: I0107 04:42:16.519067 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f54697b86-x8z4v_0d68262c-96ba-42af-8b46-f13aa424ba0d/placement-api/0.log"
Jan 07 04:42:16 crc kubenswrapper[4980]: I0107 04:42:16.603133 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f54697b86-x8z4v_0d68262c-96ba-42af-8b46-f13aa424ba0d/placement-log/0.log"
Jan 07 04:42:16 crc kubenswrapper[4980]: I0107 04:42:16.717903 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_af77d785-4fe8-4d72-a393-a7da215c4c55/setup-container/0.log"
Jan 07 04:42:16 crc kubenswrapper[4980]: I0107 04:42:16.907019 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_af77d785-4fe8-4d72-a393-a7da215c4c55/setup-container/0.log"
Jan 07 04:42:16 crc kubenswrapper[4980]: I0107 04:42:16.921383 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_af77d785-4fe8-4d72-a393-a7da215c4c55/rabbitmq/0.log"
Jan 07 04:42:17 crc kubenswrapper[4980]: I0107 04:42:17.040361 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_837d407a-b0ff-4fec-8c21-e30b95cd3d7b/setup-container/0.log"
Jan 07 04:42:17 crc kubenswrapper[4980]: I0107 04:42:17.182132 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_837d407a-b0ff-4fec-8c21-e30b95cd3d7b/setup-container/0.log"
Jan 07 04:42:17 crc kubenswrapper[4980]: I0107 04:42:17.216146 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_837d407a-b0ff-4fec-8c21-e30b95cd3d7b/rabbitmq/0.log"
Jan 07 04:42:17 crc kubenswrapper[4980]: I0107 04:42:17.300186 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2kbcv_5319b8be-e13c-4d5f-92d5-41d82748a080/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 07 04:42:17 crc kubenswrapper[4980]: I0107 04:42:17.443209 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-phv2v_3b42fafe-35e4-45a7-b3c9-95d8b9caa607/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 07 04:42:17 crc kubenswrapper[4980]: I0107 04:42:17.548889 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8cgvs_6c12fa92-7a85-42c7-90f2-3b837c2067f8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.161971 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tv787_841894d0-7f26-4642-ac09-1395082e288e/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.216339 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-q252v_657c1546-50a5-49f6-9db2-a85ade05e059/ssh-known-hosts-edpm-deployment/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.398342 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85968999bf-kv4dj_df0744c9-9130-4abb-be49-156d72cc1a20/proxy-server/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.502250 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-89vfr_a3566d37-de40-4834-9bbc-48dc6fe7e9c5/swift-ring-rebalance/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.532075 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85968999bf-kv4dj_df0744c9-9130-4abb-be49-156d72cc1a20/proxy-httpd/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.729172 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-reaper/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.732575 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-auditor/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.881713 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-replicator/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.890935 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/account-server/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.930741 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-auditor/0.log"
Jan 07 04:42:18 crc kubenswrapper[4980]: I0107 04:42:18.950198 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-replicator/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.125887 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-updater/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.136928 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/container-server/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.155443 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-auditor/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.162939 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-expirer/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.323578 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-replicator/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.365879 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/rsync/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.390539 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-server/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.407271 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/object-updater/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.607601 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6b5878bb-8928-4957-a27d-ce18da212460/swift-recon-cron/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.656846 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-jz6pb_7cd7afa6-8208-47e3-b598-0f2e8578dc3f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.858177 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4d0a99f6-8fa8-40a7-b994-16a2e287c6ee/tempest-tests-tempest-tests-runner/0.log"
Jan 07 04:42:19 crc kubenswrapper[4980]: I0107 04:42:19.891113 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d8eaae2d-e134-40ee-b33a-51a04571798a/test-operator-logs-container/0.log"
Jan 07 04:42:20 crc kubenswrapper[4980]: I0107 04:42:20.044175 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5c8vs_13eceb60-89ef-4f65-9639-7295976d7c72/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 07 04:42:30 crc kubenswrapper[4980]: I0107 04:42:30.488300 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_cf13ed1a-99f7-4574-a18a-7e559c48ddaa/memcached/0.log"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.226195 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7n7hc"]
Jan 07 04:42:33 crc kubenswrapper[4980]: E0107 04:42:33.226908 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="extract-content"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.226921 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="extract-content"
Jan 07 04:42:33 crc kubenswrapper[4980]: E0107 04:42:33.226935 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="extract-utilities"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.226941 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="extract-utilities"
Jan 07 04:42:33 crc kubenswrapper[4980]: E0107 04:42:33.226970 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="registry-server"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.226977 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="registry-server"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.227147 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a8acc9c-cac7-458c-b704-df411be7f7a3" containerName="registry-server"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.228452 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.236033 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7n7hc"]
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.309037 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-utilities\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.309146 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-catalog-content\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.309196 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77h7b\" (UniqueName: \"kubernetes.io/projected/c6072af0-58b4-4190-8a7f-45f3d3ccb513-kube-api-access-77h7b\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.410251 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-utilities\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.410359 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-catalog-content\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.410412 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77h7b\" (UniqueName: \"kubernetes.io/projected/c6072af0-58b4-4190-8a7f-45f3d3ccb513-kube-api-access-77h7b\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.411090 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-utilities\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.411399 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-catalog-content\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.435233 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77h7b\" (UniqueName: \"kubernetes.io/projected/c6072af0-58b4-4190-8a7f-45f3d3ccb513-kube-api-access-77h7b\") pod \"certified-operators-7n7hc\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") " pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.544451 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:33 crc kubenswrapper[4980]: I0107 04:42:33.938875 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7n7hc"]
Jan 07 04:42:34 crc kubenswrapper[4980]: I0107 04:42:34.399124 4980 generic.go:334] "Generic (PLEG): container finished" podID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerID="6e43abcc078204ded962f094c43a92679cc0514acd6f2a4a60b963d3fc2d8735" exitCode=0
Jan 07 04:42:34 crc kubenswrapper[4980]: I0107 04:42:34.399168 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7n7hc" event={"ID":"c6072af0-58b4-4190-8a7f-45f3d3ccb513","Type":"ContainerDied","Data":"6e43abcc078204ded962f094c43a92679cc0514acd6f2a4a60b963d3fc2d8735"}
Jan 07 04:42:34 crc kubenswrapper[4980]: I0107 04:42:34.399195 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7n7hc" event={"ID":"c6072af0-58b4-4190-8a7f-45f3d3ccb513","Type":"ContainerStarted","Data":"5b441ab85d2486f3557193a4e95aaaa7a5fb685957fb63836af90f66b9ebcb7b"}
Jan 07 04:42:35 crc kubenswrapper[4980]: I0107 04:42:35.409962 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7n7hc" event={"ID":"c6072af0-58b4-4190-8a7f-45f3d3ccb513","Type":"ContainerStarted","Data":"9ae638845310b3c3f04442895340b7f9393183407a845eeb4892ab064588e44c"}
Jan 07 04:42:36 crc kubenswrapper[4980]: I0107 04:42:36.421173 4980 generic.go:334] "Generic (PLEG): container finished" podID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerID="9ae638845310b3c3f04442895340b7f9393183407a845eeb4892ab064588e44c" exitCode=0
Jan 07 04:42:36 crc kubenswrapper[4980]: I0107 04:42:36.421295 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7n7hc" event={"ID":"c6072af0-58b4-4190-8a7f-45f3d3ccb513","Type":"ContainerDied","Data":"9ae638845310b3c3f04442895340b7f9393183407a845eeb4892ab064588e44c"}
Jan 07 04:42:37 crc kubenswrapper[4980]: I0107 04:42:37.431727 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7n7hc" event={"ID":"c6072af0-58b4-4190-8a7f-45f3d3ccb513","Type":"ContainerStarted","Data":"c2ff53440418996488e0606fafbd8e75cc261f24f589a099733d790f78fdca21"}
Jan 07 04:42:37 crc kubenswrapper[4980]: I0107 04:42:37.449260 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7n7hc" podStartSLOduration=1.755935606 podStartE2EDuration="4.44923735s" podCreationTimestamp="2026-01-07 04:42:33 +0000 UTC" firstStartedPulling="2026-01-07 04:42:34.40078065 +0000 UTC m=+4200.966475385" lastFinishedPulling="2026-01-07 04:42:37.094082354 +0000 UTC m=+4203.659777129" observedRunningTime="2026-01-07 04:42:37.44791483 +0000 UTC m=+4204.013609585" watchObservedRunningTime="2026-01-07 04:42:37.44923735 +0000 UTC m=+4204.014932085"
Jan 07 04:42:43 crc kubenswrapper[4980]: I0107 04:42:43.545870 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:43 crc kubenswrapper[4980]: I0107 04:42:43.546623 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:43 crc kubenswrapper[4980]: I0107 04:42:43.628795 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:43 crc kubenswrapper[4980]: I0107 04:42:43.832602 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:43 crc kubenswrapper[4980]: I0107 04:42:43.886609 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7n7hc"]
Jan 07 04:42:45 crc kubenswrapper[4980]: I0107 04:42:45.785775 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7n7hc" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="registry-server" containerID="cri-o://c2ff53440418996488e0606fafbd8e75cc261f24f589a099733d790f78fdca21" gracePeriod=2
Jan 07 04:42:46 crc kubenswrapper[4980]: I0107 04:42:46.798006 4980 generic.go:334] "Generic (PLEG): container finished" podID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerID="c2ff53440418996488e0606fafbd8e75cc261f24f589a099733d790f78fdca21" exitCode=0
Jan 07 04:42:46 crc kubenswrapper[4980]: I0107 04:42:46.798105 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7n7hc" event={"ID":"c6072af0-58b4-4190-8a7f-45f3d3ccb513","Type":"ContainerDied","Data":"c2ff53440418996488e0606fafbd8e75cc261f24f589a099733d790f78fdca21"}
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.431286 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.582665 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-utilities\") pod \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") "
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.582822 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77h7b\" (UniqueName: \"kubernetes.io/projected/c6072af0-58b4-4190-8a7f-45f3d3ccb513-kube-api-access-77h7b\") pod \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") "
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.583853 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-utilities" (OuterVolumeSpecName: "utilities") pod "c6072af0-58b4-4190-8a7f-45f3d3ccb513" (UID: "c6072af0-58b4-4190-8a7f-45f3d3ccb513"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.584085 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-catalog-content\") pod \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\" (UID: \"c6072af0-58b4-4190-8a7f-45f3d3ccb513\") "
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.584709 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-utilities\") on node \"crc\" DevicePath \"\""
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.659049 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6072af0-58b4-4190-8a7f-45f3d3ccb513" (UID: "c6072af0-58b4-4190-8a7f-45f3d3ccb513"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.687101 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6072af0-58b4-4190-8a7f-45f3d3ccb513-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.820761 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7n7hc" event={"ID":"c6072af0-58b4-4190-8a7f-45f3d3ccb513","Type":"ContainerDied","Data":"5b441ab85d2486f3557193a4e95aaaa7a5fb685957fb63836af90f66b9ebcb7b"}
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.820919 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7n7hc"
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.821140 4980 scope.go:117] "RemoveContainer" containerID="c2ff53440418996488e0606fafbd8e75cc261f24f589a099733d790f78fdca21"
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.859928 4980 scope.go:117] "RemoveContainer" containerID="9ae638845310b3c3f04442895340b7f9393183407a845eeb4892ab064588e44c"
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.970260 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6072af0-58b4-4190-8a7f-45f3d3ccb513-kube-api-access-77h7b" (OuterVolumeSpecName: "kube-api-access-77h7b") pod "c6072af0-58b4-4190-8a7f-45f3d3ccb513" (UID: "c6072af0-58b4-4190-8a7f-45f3d3ccb513"). InnerVolumeSpecName "kube-api-access-77h7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.991932 4980 scope.go:117] "RemoveContainer" containerID="6e43abcc078204ded962f094c43a92679cc0514acd6f2a4a60b963d3fc2d8735"
Jan 07 04:42:47 crc kubenswrapper[4980]: I0107 04:42:47.993487 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77h7b\" (UniqueName: \"kubernetes.io/projected/c6072af0-58b4-4190-8a7f-45f3d3ccb513-kube-api-access-77h7b\") on node \"crc\" DevicePath \"\""
Jan 07 04:42:48 crc kubenswrapper[4980]: I0107 04:42:48.216565 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7n7hc"]
Jan 07 04:42:48 crc kubenswrapper[4980]: I0107 04:42:48.230091 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7n7hc"]
Jan 07 04:42:49 crc kubenswrapper[4980]: I0107 04:42:49.752719 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" path="/var/lib/kubelet/pods/c6072af0-58b4-4190-8a7f-45f3d3ccb513/volumes"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.072520 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/util/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.233255 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/util/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.247021 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/pull/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.316395 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/pull/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.447264 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/extract/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.452245 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/util/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.466682 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88654263b94365b076ed6bec4fe75d72c301f8429d4e20046a3ea11412rtm4k_1d43d97b-d62f-4e1e-b672-875c0dccca4e/pull/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.672200 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-mqsl2_2e8333e2-a664-4f9a-8ddb-07e31ddc3020/manager/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.716898 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-qj7hn_81f997a4-1aea-45d5-bd2f-8e6d1e8fdc61/manager/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.868227 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-8rjjc_cc01f5c0-320a-4645-bb96-5bd8b6490e08/manager/0.log"
Jan 07 04:42:50 crc kubenswrapper[4980]: I0107 04:42:50.983578 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-gnrv9_920991d2-089f-4864-8237-9684c6282a04/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.059719 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-vghwj_93a3e6e3-bd9b-4883-923c-6d58ae83000d/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.151776 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-r9bm7_509933ce-8dca-4f14-bdc4-a5f1608954b3/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.342867 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-s5s7g_0b63f351-f7ac-44a4-8a65-a6357043af12/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.458825 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-c5hk9_4223a956-7692-4bcc-8193-02312792b1f9/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.583289 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-ndxck_d8d586b5-b752-4122-99af-ba4ce3bbad29/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.588663 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-5jzx4_86933336-6f6c-4327-bcde-a4d1a6caba77/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.740677 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-cc9rk_edf44de7-04e1-435c-a943-c47873d4e364/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.808475 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-kpmrn_82ed1518-12d9-412b-86cc-03fbb1f74bd6/manager/0.log"
Jan 07 04:42:51 crc kubenswrapper[4980]: I0107 04:42:51.969452 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-55fcj_96049d0d-7c90-4cab-a18c-5fbd4e9f8373/manager/0.log"
Jan 07 04:42:52 crc kubenswrapper[4980]: I0107 04:42:52.014181 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-fd8dn_881d9164-37f7-48da-b203-a2e5db8e2d23/manager/0.log"
Jan 07 04:42:52 crc kubenswrapper[4980]: I0107 04:42:52.160426 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd72mjfc_b74ebcfc-0b62-4a3c-a55d-c9a3d98f5b1c/manager/0.log"
Jan 07 04:42:52 crc kubenswrapper[4980]: I0107 04:42:52.536695 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54bc58988c-zrwgp_4af3dbe2-f463-48b6-9264-9d8ad4970648/operator/0.log"
Jan 07 04:42:52 crc kubenswrapper[4980]: I0107 04:42:52.610695 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-q54qd_5cb72193-59c1-49a9-b1fd-26191d36f265/registry-server/0.log"
Jan 07 04:42:52 crc kubenswrapper[4980]: I0107 04:42:52.878838 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-fdnhb_c86a562f-bdd6-4463-8edc-6ce72f41af16/manager/0.log"
Jan 07 04:42:52 crc kubenswrapper[4980]: I0107 04:42:52.968598 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-d25kc_58abb189-9361-4eac-8663-55e110e21383/manager/0.log"
Jan 07 04:42:53 crc kubenswrapper[4980]: I0107 04:42:53.071409 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fxr6h_e4c6355b-ca56-47e9-897e-ed6b641d456a/operator/0.log"
Jan 07 04:42:53 crc kubenswrapper[4980]: I0107 04:42:53.201246 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-b4dq4_e3f2e1ae-fa58-4090-909d-4efdacb15545/manager/0.log"
Jan 07 04:42:53 crc kubenswrapper[4980]: I0107 04:42:53.241056 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7bbf496545-vdwhj_4cfd0d7f-4c37-4cd5-a97a-ceff58bd52a3/manager/0.log"
Jan 07 04:42:53 crc kubenswrapper[4980]: I0107 04:42:53.354115 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-d7p5d_28cf4151-f7be-4992-87f8-e34bf1d0a9c0/manager/0.log"
Jan 07 04:42:53 crc kubenswrapper[4980]: I0107 04:42:53.410373 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-8qlqr_b6565eee-ab9b-4a1a-a5a8-6036df399731/manager/0.log"
Jan 07 04:42:53 crc kubenswrapper[4980]: I0107 04:42:53.483456 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-qlpkh_c1ae6abf-8410-4816-b2a1-b6a9f0550eb2/manager/0.log"
Jan 07 04:43:06 crc kubenswrapper[4980]: I0107 04:43:06.543797 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 04:43:06 crc kubenswrapper[4980]: I0107 04:43:06.544318 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 04:43:15 crc kubenswrapper[4980]: I0107 04:43:15.835355 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xtj2h_7e4e8bcd-d566-43ed-ba1d-e5c367faca7d/control-plane-machine-set-operator/0.log"
Jan 07 04:43:16 crc kubenswrapper[4980]: I0107 04:43:16.006242 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sjzgd_b7a65dcf-9933-4d66-92c4-e1c9d9e209e9/machine-api-operator/0.log"
Jan 07 04:43:16 crc kubenswrapper[4980]: I0107 04:43:16.019675 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sjzgd_b7a65dcf-9933-4d66-92c4-e1c9d9e209e9/kube-rbac-proxy/0.log"
Jan 07 04:43:33 crc kubenswrapper[4980]: I0107 04:43:33.173127 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vchrs_a5762e00-3e81-401d-8365-8d6791ecbf4f/cert-manager-controller/0.log"
Jan 07 04:43:33 crc kubenswrapper[4980]: I0107 04:43:33.384089 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dfqdk_c7ebfede-1363-495d-b143-2e8db44394c0/cert-manager-cainjector/0.log"
Jan 07 04:43:33 crc kubenswrapper[4980]: I0107 04:43:33.421624 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25nfw_59f9bc30-5a23-4161-95fa-68d941208670/cert-manager-webhook/0.log"
Jan 07 04:43:36 crc kubenswrapper[4980]: I0107 04:43:36.543653 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 04:43:36 crc kubenswrapper[4980]: I0107 04:43:36.544149 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 04:43:49 crc kubenswrapper[4980]: I0107 04:43:49.889855 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-rc5d6_680a6c39-957f-43ff-82e4-c70f626c14c6/nmstate-console-plugin/0.log"
Jan 07 04:43:49 crc kubenswrapper[4980]: I0107 04:43:49.909796 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fc77g_c5c3ce37-b71d-4353-b725-a82d5aeb2f81/nmstate-handler/0.log"
Jan 07 04:43:50 crc kubenswrapper[4980]: I0107 04:43:50.071857 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-gd4b8_550a6236-4f98-4b9a-ad9d-bce2a985a853/kube-rbac-proxy/0.log"
Jan 07 04:43:50 crc kubenswrapper[4980]: I0107 04:43:50.108189 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-gd4b8_550a6236-4f98-4b9a-ad9d-bce2a985a853/nmstate-metrics/0.log"
Jan 07 04:43:50 crc kubenswrapper[4980]: I0107 04:43:50.244033 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-jjfdp_b6737cd2-b163-4a8a-a674-54ba3a715f91/nmstate-operator/0.log"
Jan 07 04:43:50 crc kubenswrapper[4980]: I0107 04:43:50.304093 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-mspkv_63e6b18c-21c5-4d1d-85b9-0db97630b4b8/nmstate-webhook/0.log"
Jan 07 04:44:06 crc kubenswrapper[4980]: I0107 04:44:06.543520 4980 patch_prober.go:28] interesting pod/machine-config-daemon-hzlt6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 04:44:06 crc kubenswrapper[4980]: I0107 04:44:06.544077 4980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 04:44:06 crc kubenswrapper[4980]: I0107 04:44:06.544123 4980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6"
Jan 07 04:44:06 crc kubenswrapper[4980]: I0107 04:44:06.544752 4980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766"} pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" containerMessage="Container machine-config-daemon failed liveness probe, will
be restarted" Jan 07 04:44:06 crc kubenswrapper[4980]: I0107 04:44:06.544794 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerName="machine-config-daemon" containerID="cri-o://05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" gracePeriod=600 Jan 07 04:44:06 crc kubenswrapper[4980]: E0107 04:44:06.686623 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.272843 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-vg2gl_fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3/kube-rbac-proxy/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.325846 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-vg2gl_fc1d0de6-6d92-4b24-b71b-7f2f04a93ca3/controller/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.428328 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.611588 4980 generic.go:334] "Generic (PLEG): container finished" podID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" exitCode=0 Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.611603 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerDied","Data":"05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766"} Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.611679 4980 scope.go:117] "RemoveContainer" containerID="734a74dc3c192c49aa206faa107f7ca49bb64ac582050f82bf6fdf1e63cf5035" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.612075 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.612387 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:44:07 crc kubenswrapper[4980]: E0107 04:44:07.612717 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.680850 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.686183 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.712781 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.916693 4980 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.918297 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.920696 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:44:07 crc kubenswrapper[4980]: I0107 04:44:07.923519 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.107392 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/controller/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.116558 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-frr-files/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.151679 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-reloader/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.168559 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/cp-metrics/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.311922 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/frr-metrics/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.340500 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/kube-rbac-proxy-frr/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.365899 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/kube-rbac-proxy/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.522186 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/reloader/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.572489 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-bn2x7_dc6c1183-144b-4b67-baad-9e04c4492453/frr-k8s-webhook-server/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.792832 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65b595589f-ljh57_a2680c24-a9d4-4daa-9ed0-3bc391695662/manager/0.log" Jan 07 04:44:08 crc kubenswrapper[4980]: I0107 04:44:08.970112 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-65b4bf7cb4-dm7j7_db5d65a6-7f55-491f-8ea1-e6f3c1715c00/webhook-server/0.log" Jan 07 04:44:09 crc kubenswrapper[4980]: I0107 04:44:09.083398 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7qwgg_bf49087b-cf7a-41cf-85a4-e76d00ae1381/kube-rbac-proxy/0.log" Jan 07 04:44:09 crc kubenswrapper[4980]: I0107 04:44:09.662720 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7qwgg_bf49087b-cf7a-41cf-85a4-e76d00ae1381/speaker/0.log" Jan 07 04:44:09 crc kubenswrapper[4980]: I0107 04:44:09.820690 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-24j8g_3f131e38-245d-400d-8a7b-f9c7dc486db8/frr/0.log" Jan 07 04:44:20 crc kubenswrapper[4980]: I0107 04:44:20.736124 4980 scope.go:117] 
"RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:44:20 crc kubenswrapper[4980]: E0107 04:44:20.736884 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:44:25 crc kubenswrapper[4980]: I0107 04:44:25.700744 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/util/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.072689 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/util/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.110616 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/pull/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.112698 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/pull/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.247919 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/util/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.285486 4980 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/extract/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.292534 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xlq8l_8424289d-d257-486f-a82a-8d8cec374808/pull/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.433299 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/util/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.593379 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/pull/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.600312 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/pull/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.643266 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/util/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.802820 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/pull/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.815988 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/util/0.log" Jan 07 04:44:26 crc kubenswrapper[4980]: I0107 04:44:26.840935 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8sht7l_48de5406-371a-47e2-90d8-d6fd88506301/extract/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.007641 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-utilities/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.153630 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-content/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.186887 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-content/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.196885 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-utilities/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.391507 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-content/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.401210 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/extract-utilities/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.619329 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-utilities/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.912432 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcz5z_acd8b20e-5d5b-4f22-b29e-109b6e039ad9/registry-server/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.923424 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-content/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.950181 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-content/0.log" Jan 07 04:44:27 crc kubenswrapper[4980]: I0107 04:44:27.953901 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-utilities/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.182533 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-content/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.190689 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/extract-utilities/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.478131 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-utilities/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.518109 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-t5mfp_cbc2df67-d00a-4200-b46f-b9eca0da0f4f/marketplace-operator/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.661092 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pgfdc_d57ca4dc-5247-4b77-808b-c7e095b4b167/registry-server/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.685725 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-utilities/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.756301 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-content/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.792886 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-content/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.897868 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-utilities/0.log" Jan 07 04:44:28 crc kubenswrapper[4980]: I0107 04:44:28.973835 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/extract-content/0.log" Jan 07 04:44:29 crc kubenswrapper[4980]: I0107 04:44:29.103429 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-utilities/0.log" Jan 07 04:44:29 crc kubenswrapper[4980]: I0107 04:44:29.213448 4980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qdt7j_4a3ee567-8dae-40c8-9f42-6a01ec72b480/registry-server/0.log" Jan 07 04:44:29 crc kubenswrapper[4980]: I0107 04:44:29.309623 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-content/0.log" Jan 07 04:44:29 crc kubenswrapper[4980]: I0107 04:44:29.332400 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-utilities/0.log" Jan 07 04:44:29 crc kubenswrapper[4980]: I0107 04:44:29.353712 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-content/0.log" Jan 07 04:44:29 crc kubenswrapper[4980]: I0107 04:44:29.513224 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-utilities/0.log" Jan 07 04:44:29 crc kubenswrapper[4980]: I0107 04:44:29.556120 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/extract-content/0.log" Jan 07 04:44:30 crc kubenswrapper[4980]: I0107 04:44:30.021963 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5h6xq_edc1e23f-0cd2-4ab5-be99-0e90aa809529/registry-server/0.log" Jan 07 04:44:31 crc kubenswrapper[4980]: I0107 04:44:31.736272 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:44:31 crc kubenswrapper[4980]: E0107 04:44:31.737089 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:44:43 crc kubenswrapper[4980]: I0107 04:44:43.747689 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:44:43 crc kubenswrapper[4980]: E0107 04:44:43.748799 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:44:54 crc kubenswrapper[4980]: I0107 04:44:54.736342 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:44:54 crc kubenswrapper[4980]: E0107 04:44:54.737753 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.202458 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s"] Jan 07 04:45:00 crc kubenswrapper[4980]: E0107 04:45:00.203404 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="extract-content" Jan 07 04:45:00 crc 
kubenswrapper[4980]: I0107 04:45:00.203416 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="extract-content" Jan 07 04:45:00 crc kubenswrapper[4980]: E0107 04:45:00.203446 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="extract-utilities" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.203453 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="extract-utilities" Jan 07 04:45:00 crc kubenswrapper[4980]: E0107 04:45:00.203467 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="registry-server" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.203473 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="registry-server" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.203659 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6072af0-58b4-4190-8a7f-45f3d3ccb513" containerName="registry-server" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.204273 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.206364 4980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.206381 4980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.216844 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s"] Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.246045 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgg4\" (UniqueName: \"kubernetes.io/projected/4ff9da04-7a95-45f1-be33-752061606829-kube-api-access-tfgg4\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.246462 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ff9da04-7a95-45f1-be33-752061606829-secret-volume\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.246790 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ff9da04-7a95-45f1-be33-752061606829-config-volume\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.349067 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ff9da04-7a95-45f1-be33-752061606829-config-volume\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.349140 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgg4\" (UniqueName: \"kubernetes.io/projected/4ff9da04-7a95-45f1-be33-752061606829-kube-api-access-tfgg4\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.349220 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ff9da04-7a95-45f1-be33-752061606829-secret-volume\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.350198 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ff9da04-7a95-45f1-be33-752061606829-config-volume\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.359236 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4ff9da04-7a95-45f1-be33-752061606829-secret-volume\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.371071 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgg4\" (UniqueName: \"kubernetes.io/projected/4ff9da04-7a95-45f1-be33-752061606829-kube-api-access-tfgg4\") pod \"collect-profiles-29462685-jlr2s\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:00 crc kubenswrapper[4980]: I0107 04:45:00.564629 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:01 crc kubenswrapper[4980]: I0107 04:45:01.124057 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s"] Jan 07 04:45:02 crc kubenswrapper[4980]: I0107 04:45:02.140022 4980 generic.go:334] "Generic (PLEG): container finished" podID="4ff9da04-7a95-45f1-be33-752061606829" containerID="35373ec5d3ad47ea008cb31fc82119b6e7d05731628c2c678e684ddabe55b817" exitCode=0 Jan 07 04:45:02 crc kubenswrapper[4980]: I0107 04:45:02.140097 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" event={"ID":"4ff9da04-7a95-45f1-be33-752061606829","Type":"ContainerDied","Data":"35373ec5d3ad47ea008cb31fc82119b6e7d05731628c2c678e684ddabe55b817"} Jan 07 04:45:02 crc kubenswrapper[4980]: I0107 04:45:02.140262 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" 
event={"ID":"4ff9da04-7a95-45f1-be33-752061606829","Type":"ContainerStarted","Data":"390adf103fff6e8311e3a629a115087e440232ac746ab28eda9f8e143c0a071f"} Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.609873 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.712252 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfgg4\" (UniqueName: \"kubernetes.io/projected/4ff9da04-7a95-45f1-be33-752061606829-kube-api-access-tfgg4\") pod \"4ff9da04-7a95-45f1-be33-752061606829\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.712330 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ff9da04-7a95-45f1-be33-752061606829-secret-volume\") pod \"4ff9da04-7a95-45f1-be33-752061606829\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.712355 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ff9da04-7a95-45f1-be33-752061606829-config-volume\") pod \"4ff9da04-7a95-45f1-be33-752061606829\" (UID: \"4ff9da04-7a95-45f1-be33-752061606829\") " Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.713284 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff9da04-7a95-45f1-be33-752061606829-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ff9da04-7a95-45f1-be33-752061606829" (UID: "4ff9da04-7a95-45f1-be33-752061606829"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.718715 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff9da04-7a95-45f1-be33-752061606829-kube-api-access-tfgg4" (OuterVolumeSpecName: "kube-api-access-tfgg4") pod "4ff9da04-7a95-45f1-be33-752061606829" (UID: "4ff9da04-7a95-45f1-be33-752061606829"). InnerVolumeSpecName "kube-api-access-tfgg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.721673 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff9da04-7a95-45f1-be33-752061606829-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4ff9da04-7a95-45f1-be33-752061606829" (UID: "4ff9da04-7a95-45f1-be33-752061606829"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.815721 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfgg4\" (UniqueName: \"kubernetes.io/projected/4ff9da04-7a95-45f1-be33-752061606829-kube-api-access-tfgg4\") on node \"crc\" DevicePath \"\"" Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.815758 4980 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ff9da04-7a95-45f1-be33-752061606829-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:45:03 crc kubenswrapper[4980]: I0107 04:45:03.815772 4980 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ff9da04-7a95-45f1-be33-752061606829-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 04:45:04 crc kubenswrapper[4980]: I0107 04:45:04.159767 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" 
event={"ID":"4ff9da04-7a95-45f1-be33-752061606829","Type":"ContainerDied","Data":"390adf103fff6e8311e3a629a115087e440232ac746ab28eda9f8e143c0a071f"} Jan 07 04:45:04 crc kubenswrapper[4980]: I0107 04:45:04.159814 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="390adf103fff6e8311e3a629a115087e440232ac746ab28eda9f8e143c0a071f" Jan 07 04:45:04 crc kubenswrapper[4980]: I0107 04:45:04.159823 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462685-jlr2s" Jan 07 04:45:04 crc kubenswrapper[4980]: I0107 04:45:04.676021 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598"] Jan 07 04:45:04 crc kubenswrapper[4980]: I0107 04:45:04.682262 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462640-x2598"] Jan 07 04:45:05 crc kubenswrapper[4980]: I0107 04:45:05.744598 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee618e14-44ba-4a43-9ce3-b933ddc708fa" path="/var/lib/kubelet/pods/ee618e14-44ba-4a43-9ce3-b933ddc708fa/volumes" Jan 07 04:45:06 crc kubenswrapper[4980]: I0107 04:45:06.736828 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:45:06 crc kubenswrapper[4980]: E0107 04:45:06.737585 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:45:19 crc kubenswrapper[4980]: I0107 04:45:19.735857 4980 scope.go:117] "RemoveContainer" 
containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:45:19 crc kubenswrapper[4980]: E0107 04:45:19.736912 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:45:30 crc kubenswrapper[4980]: I0107 04:45:30.736190 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:45:30 crc kubenswrapper[4980]: E0107 04:45:30.737647 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:45:41 crc kubenswrapper[4980]: I0107 04:45:41.736371 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:45:41 crc kubenswrapper[4980]: E0107 04:45:41.737597 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:45:43 crc kubenswrapper[4980]: I0107 04:45:43.293722 4980 scope.go:117] 
"RemoveContainer" containerID="8a217cb936d622019ce70ff748e7e3e92e2001f856611ef504e4e4e030cb2fde" Jan 07 04:45:55 crc kubenswrapper[4980]: I0107 04:45:55.736616 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:45:55 crc kubenswrapper[4980]: E0107 04:45:55.738032 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:46:07 crc kubenswrapper[4980]: I0107 04:46:07.947632 4980 generic.go:334] "Generic (PLEG): container finished" podID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerID="242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4" exitCode=0 Jan 07 04:46:07 crc kubenswrapper[4980]: I0107 04:46:07.947675 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" event={"ID":"859120b5-d837-4e48-ac1a-c8090e7793e0","Type":"ContainerDied","Data":"242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4"} Jan 07 04:46:07 crc kubenswrapper[4980]: I0107 04:46:07.949005 4980 scope.go:117] "RemoveContainer" containerID="242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4" Jan 07 04:46:08 crc kubenswrapper[4980]: I0107 04:46:08.077954 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwgmc_must-gather-qvxjz_859120b5-d837-4e48-ac1a-c8090e7793e0/gather/0.log" Jan 07 04:46:10 crc kubenswrapper[4980]: I0107 04:46:10.737663 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:46:10 crc kubenswrapper[4980]: E0107 04:46:10.740150 4980 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:46:19 crc kubenswrapper[4980]: I0107 04:46:19.449518 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwgmc/must-gather-qvxjz"] Jan 07 04:46:19 crc kubenswrapper[4980]: I0107 04:46:19.450457 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerName="copy" containerID="cri-o://9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33" gracePeriod=2 Jan 07 04:46:19 crc kubenswrapper[4980]: I0107 04:46:19.460978 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwgmc/must-gather-qvxjz"] Jan 07 04:46:19 crc kubenswrapper[4980]: I0107 04:46:19.915186 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwgmc_must-gather-qvxjz_859120b5-d837-4e48-ac1a-c8090e7793e0/copy/0.log" Jan 07 04:46:19 crc kubenswrapper[4980]: I0107 04:46:19.915941 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.057325 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/859120b5-d837-4e48-ac1a-c8090e7793e0-must-gather-output\") pod \"859120b5-d837-4e48-ac1a-c8090e7793e0\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.057454 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72swr\" (UniqueName: \"kubernetes.io/projected/859120b5-d837-4e48-ac1a-c8090e7793e0-kube-api-access-72swr\") pod \"859120b5-d837-4e48-ac1a-c8090e7793e0\" (UID: \"859120b5-d837-4e48-ac1a-c8090e7793e0\") " Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.072075 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859120b5-d837-4e48-ac1a-c8090e7793e0-kube-api-access-72swr" (OuterVolumeSpecName: "kube-api-access-72swr") pod "859120b5-d837-4e48-ac1a-c8090e7793e0" (UID: "859120b5-d837-4e48-ac1a-c8090e7793e0"). InnerVolumeSpecName "kube-api-access-72swr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.089412 4980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwgmc_must-gather-qvxjz_859120b5-d837-4e48-ac1a-c8090e7793e0/copy/0.log" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.089875 4980 generic.go:334] "Generic (PLEG): container finished" podID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerID="9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33" exitCode=143 Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.089937 4980 scope.go:117] "RemoveContainer" containerID="9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.090176 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwgmc/must-gather-qvxjz" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.128298 4980 scope.go:117] "RemoveContainer" containerID="242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.160015 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72swr\" (UniqueName: \"kubernetes.io/projected/859120b5-d837-4e48-ac1a-c8090e7793e0-kube-api-access-72swr\") on node \"crc\" DevicePath \"\"" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.228978 4980 scope.go:117] "RemoveContainer" containerID="9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33" Jan 07 04:46:20 crc kubenswrapper[4980]: E0107 04:46:20.229541 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33\": container with ID starting with 9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33 not found: ID does not exist" 
containerID="9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.229614 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33"} err="failed to get container status \"9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33\": rpc error: code = NotFound desc = could not find container \"9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33\": container with ID starting with 9c5b8bb791fbf04e6f92096fb6a8cc38f32ff4b3362457b2ddc880e9d6ea4c33 not found: ID does not exist" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.229673 4980 scope.go:117] "RemoveContainer" containerID="242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4" Jan 07 04:46:20 crc kubenswrapper[4980]: E0107 04:46:20.230144 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4\": container with ID starting with 242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4 not found: ID does not exist" containerID="242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.230179 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4"} err="failed to get container status \"242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4\": rpc error: code = NotFound desc = could not find container \"242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4\": container with ID starting with 242073bf2ec7127a6869585d6d0a1bbade886fc25af70b131e20ac7a29b2aab4 not found: ID does not exist" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.276490 4980 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859120b5-d837-4e48-ac1a-c8090e7793e0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "859120b5-d837-4e48-ac1a-c8090e7793e0" (UID: "859120b5-d837-4e48-ac1a-c8090e7793e0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:46:20 crc kubenswrapper[4980]: I0107 04:46:20.364619 4980 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/859120b5-d837-4e48-ac1a-c8090e7793e0-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 07 04:46:21 crc kubenswrapper[4980]: I0107 04:46:21.745860 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" path="/var/lib/kubelet/pods/859120b5-d837-4e48-ac1a-c8090e7793e0/volumes" Jan 07 04:46:22 crc kubenswrapper[4980]: I0107 04:46:22.736638 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:46:22 crc kubenswrapper[4980]: E0107 04:46:22.737248 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:46:36 crc kubenswrapper[4980]: I0107 04:46:36.736070 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:46:36 crc kubenswrapper[4980]: E0107 04:46:36.736744 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:46:51 crc kubenswrapper[4980]: I0107 04:46:51.736624 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:46:51 crc kubenswrapper[4980]: E0107 04:46:51.737715 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:47:03 crc kubenswrapper[4980]: I0107 04:47:03.750885 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:47:03 crc kubenswrapper[4980]: E0107 04:47:03.752045 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:47:18 crc kubenswrapper[4980]: I0107 04:47:18.735675 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:47:18 crc kubenswrapper[4980]: E0107 04:47:18.737922 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:47:32 crc kubenswrapper[4980]: I0107 04:47:32.735720 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:47:32 crc kubenswrapper[4980]: E0107 04:47:32.736323 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:47:43 crc kubenswrapper[4980]: I0107 04:47:43.467416 4980 scope.go:117] "RemoveContainer" containerID="af0dd8d556fc7b2c2d0c3e3a8b109eae59613997f6497620f2c5f4385003e7ae" Jan 07 04:47:44 crc kubenswrapper[4980]: I0107 04:47:44.006196 4980 scope.go:117] "RemoveContainer" containerID="2b370deb3921edc7458d87dad0e8fcaa79e880a1ab03095d1eba6a3ee12530b8" Jan 07 04:47:47 crc kubenswrapper[4980]: I0107 04:47:47.737811 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:47:47 crc kubenswrapper[4980]: E0107 04:47:47.739269 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:48:00 crc 
kubenswrapper[4980]: I0107 04:48:00.736473 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:48:00 crc kubenswrapper[4980]: E0107 04:48:00.737603 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:48:12 crc kubenswrapper[4980]: I0107 04:48:12.736834 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:48:12 crc kubenswrapper[4980]: E0107 04:48:12.737887 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:48:27 crc kubenswrapper[4980]: I0107 04:48:27.736245 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:48:27 crc kubenswrapper[4980]: E0107 04:48:27.737071 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 
07 04:48:38 crc kubenswrapper[4980]: I0107 04:48:38.735928 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:48:38 crc kubenswrapper[4980]: E0107 04:48:38.736700 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:48:52 crc kubenswrapper[4980]: I0107 04:48:52.735979 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:48:52 crc kubenswrapper[4980]: E0107 04:48:52.737208 4980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzlt6_openshift-machine-config-operator(ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" podUID="ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4" Jan 07 04:49:06 crc kubenswrapper[4980]: I0107 04:49:06.740600 4980 scope.go:117] "RemoveContainer" containerID="05d47b72ad63b8fc0a33adbacb8259c66f09cfea6c5369c7df5e272d2b1da766" Jan 07 04:49:07 crc kubenswrapper[4980]: I0107 04:49:07.972731 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzlt6" event={"ID":"ea1f90e9-fae8-436d-a7fa-5bff36e1c2a4","Type":"ContainerStarted","Data":"ddde15620d5873611a6e1bb92463f0f3b3a50add567f0dffead496ce6ec7bf76"} Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.764685 4980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-slt7n"] Jan 07 04:49:44 crc kubenswrapper[4980]: E0107 04:49:44.765771 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerName="copy" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.765791 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerName="copy" Jan 07 04:49:44 crc kubenswrapper[4980]: E0107 04:49:44.765830 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerName="gather" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.765842 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerName="gather" Jan 07 04:49:44 crc kubenswrapper[4980]: E0107 04:49:44.765866 4980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff9da04-7a95-45f1-be33-752061606829" containerName="collect-profiles" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.765878 4980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff9da04-7a95-45f1-be33-752061606829" containerName="collect-profiles" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.766186 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff9da04-7a95-45f1-be33-752061606829" containerName="collect-profiles" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.766214 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerName="copy" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.766247 4980 memory_manager.go:354] "RemoveStaleState removing state" podUID="859120b5-d837-4e48-ac1a-c8090e7793e0" containerName="gather" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.768664 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.795236 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-slt7n"] Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.960317 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25nh\" (UniqueName: \"kubernetes.io/projected/ae556004-2353-4d37-bee4-89ba3d60ba8e-kube-api-access-b25nh\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.961044 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-catalog-content\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:44 crc kubenswrapper[4980]: I0107 04:49:44.961239 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-utilities\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.063148 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-catalog-content\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.063220 4980 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-utilities\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.063297 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b25nh\" (UniqueName: \"kubernetes.io/projected/ae556004-2353-4d37-bee4-89ba3d60ba8e-kube-api-access-b25nh\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.063780 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-catalog-content\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.064012 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-utilities\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.082046 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b25nh\" (UniqueName: \"kubernetes.io/projected/ae556004-2353-4d37-bee4-89ba3d60ba8e-kube-api-access-b25nh\") pod \"redhat-operators-slt7n\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.108789 4980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:45 crc kubenswrapper[4980]: I0107 04:49:45.582145 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-slt7n"] Jan 07 04:49:46 crc kubenswrapper[4980]: W0107 04:49:46.176659 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae556004_2353_4d37_bee4_89ba3d60ba8e.slice/crio-90ea6f88b1b17697b83644f75094bfe5ca62212c8b34e5d209b82609014b2c68 WatchSource:0}: Error finding container 90ea6f88b1b17697b83644f75094bfe5ca62212c8b34e5d209b82609014b2c68: Status 404 returned error can't find the container with id 90ea6f88b1b17697b83644f75094bfe5ca62212c8b34e5d209b82609014b2c68 Jan 07 04:49:46 crc kubenswrapper[4980]: I0107 04:49:46.417655 4980 generic.go:334] "Generic (PLEG): container finished" podID="ae556004-2353-4d37-bee4-89ba3d60ba8e" containerID="ccac90365067575cef7b7719b47032d9b6b802c8a49136ba21e4345ed999767d" exitCode=0 Jan 07 04:49:46 crc kubenswrapper[4980]: I0107 04:49:46.417702 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slt7n" event={"ID":"ae556004-2353-4d37-bee4-89ba3d60ba8e","Type":"ContainerDied","Data":"ccac90365067575cef7b7719b47032d9b6b802c8a49136ba21e4345ed999767d"} Jan 07 04:49:46 crc kubenswrapper[4980]: I0107 04:49:46.417908 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slt7n" event={"ID":"ae556004-2353-4d37-bee4-89ba3d60ba8e","Type":"ContainerStarted","Data":"90ea6f88b1b17697b83644f75094bfe5ca62212c8b34e5d209b82609014b2c68"} Jan 07 04:49:46 crc kubenswrapper[4980]: I0107 04:49:46.419364 4980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 04:49:47 crc kubenswrapper[4980]: I0107 04:49:47.431858 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-slt7n" event={"ID":"ae556004-2353-4d37-bee4-89ba3d60ba8e","Type":"ContainerStarted","Data":"36b5113c63745887d775bbeb30d399d3d76fc81a14931dfe1bba68eb2b43765f"} Jan 07 04:49:49 crc kubenswrapper[4980]: I0107 04:49:49.474121 4980 generic.go:334] "Generic (PLEG): container finished" podID="ae556004-2353-4d37-bee4-89ba3d60ba8e" containerID="36b5113c63745887d775bbeb30d399d3d76fc81a14931dfe1bba68eb2b43765f" exitCode=0 Jan 07 04:49:49 crc kubenswrapper[4980]: I0107 04:49:49.474216 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slt7n" event={"ID":"ae556004-2353-4d37-bee4-89ba3d60ba8e","Type":"ContainerDied","Data":"36b5113c63745887d775bbeb30d399d3d76fc81a14931dfe1bba68eb2b43765f"} Jan 07 04:49:50 crc kubenswrapper[4980]: I0107 04:49:50.491279 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slt7n" event={"ID":"ae556004-2353-4d37-bee4-89ba3d60ba8e","Type":"ContainerStarted","Data":"38b01be05ae68a0cc9fc160733a1b91de625309db6566576ae7d9e40e111ff78"} Jan 07 04:49:50 crc kubenswrapper[4980]: I0107 04:49:50.521167 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-slt7n" podStartSLOduration=3.009741788 podStartE2EDuration="6.521145731s" podCreationTimestamp="2026-01-07 04:49:44 +0000 UTC" firstStartedPulling="2026-01-07 04:49:46.419145576 +0000 UTC m=+4632.984840311" lastFinishedPulling="2026-01-07 04:49:49.930549489 +0000 UTC m=+4636.496244254" observedRunningTime="2026-01-07 04:49:50.520200331 +0000 UTC m=+4637.085895066" watchObservedRunningTime="2026-01-07 04:49:50.521145731 +0000 UTC m=+4637.086840486" Jan 07 04:49:55 crc kubenswrapper[4980]: I0107 04:49:55.110844 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:55 crc kubenswrapper[4980]: I0107 04:49:55.112612 4980 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:49:56 crc kubenswrapper[4980]: I0107 04:49:56.183226 4980 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-slt7n" podUID="ae556004-2353-4d37-bee4-89ba3d60ba8e" containerName="registry-server" probeResult="failure" output=< Jan 07 04:49:56 crc kubenswrapper[4980]: timeout: failed to connect service ":50051" within 1s Jan 07 04:49:56 crc kubenswrapper[4980]: > Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.523128 4980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rk7"] Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.527610 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.541832 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rk7"] Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.612428 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv25c\" (UniqueName: \"kubernetes.io/projected/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-kube-api-access-xv25c\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.612742 4980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-catalog-content\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.612840 4980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-utilities\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.715248 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-catalog-content\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.715342 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-utilities\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.715399 4980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv25c\" (UniqueName: \"kubernetes.io/projected/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-kube-api-access-xv25c\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.715910 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-catalog-content\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.716174 4980 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-utilities\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.735218 4980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv25c\" (UniqueName: \"kubernetes.io/projected/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-kube-api-access-xv25c\") pod \"redhat-marketplace-k8rk7\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:57 crc kubenswrapper[4980]: I0107 04:49:57.847160 4980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:49:58 crc kubenswrapper[4980]: W0107 04:49:58.140265 4980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44a8c44_8de0_4c0c_8a00_3cb6b9c37cbf.slice/crio-2c550e63dd0bc88c5c9b425044848743ed57eb5a9298851ffb9bb28870ee32e2 WatchSource:0}: Error finding container 2c550e63dd0bc88c5c9b425044848743ed57eb5a9298851ffb9bb28870ee32e2: Status 404 returned error can't find the container with id 2c550e63dd0bc88c5c9b425044848743ed57eb5a9298851ffb9bb28870ee32e2 Jan 07 04:49:58 crc kubenswrapper[4980]: I0107 04:49:58.140930 4980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rk7"] Jan 07 04:49:58 crc kubenswrapper[4980]: I0107 04:49:58.575330 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rk7" event={"ID":"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf","Type":"ContainerStarted","Data":"e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d"} Jan 07 04:49:58 crc kubenswrapper[4980]: I0107 04:49:58.576754 4980 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-k8rk7" event={"ID":"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf","Type":"ContainerStarted","Data":"2c550e63dd0bc88c5c9b425044848743ed57eb5a9298851ffb9bb28870ee32e2"} Jan 07 04:49:59 crc kubenswrapper[4980]: I0107 04:49:59.592340 4980 generic.go:334] "Generic (PLEG): container finished" podID="c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" containerID="e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d" exitCode=0 Jan 07 04:49:59 crc kubenswrapper[4980]: I0107 04:49:59.592404 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rk7" event={"ID":"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf","Type":"ContainerDied","Data":"e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d"} Jan 07 04:50:01 crc kubenswrapper[4980]: I0107 04:50:01.625307 4980 generic.go:334] "Generic (PLEG): container finished" podID="c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" containerID="79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24" exitCode=0 Jan 07 04:50:01 crc kubenswrapper[4980]: I0107 04:50:01.625364 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rk7" event={"ID":"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf","Type":"ContainerDied","Data":"79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24"} Jan 07 04:50:02 crc kubenswrapper[4980]: I0107 04:50:02.635642 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rk7" event={"ID":"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf","Type":"ContainerStarted","Data":"bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4"} Jan 07 04:50:02 crc kubenswrapper[4980]: I0107 04:50:02.668230 4980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k8rk7" podStartSLOduration=3.044235451 podStartE2EDuration="5.668212908s" podCreationTimestamp="2026-01-07 04:49:57 +0000 UTC" 
firstStartedPulling="2026-01-07 04:49:59.609077026 +0000 UTC m=+4646.174771781" lastFinishedPulling="2026-01-07 04:50:02.233054493 +0000 UTC m=+4648.798749238" observedRunningTime="2026-01-07 04:50:02.662504522 +0000 UTC m=+4649.228199257" watchObservedRunningTime="2026-01-07 04:50:02.668212908 +0000 UTC m=+4649.233907643" Jan 07 04:50:05 crc kubenswrapper[4980]: I0107 04:50:05.194732 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:50:05 crc kubenswrapper[4980]: I0107 04:50:05.279972 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:50:05 crc kubenswrapper[4980]: I0107 04:50:05.717599 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-slt7n"] Jan 07 04:50:06 crc kubenswrapper[4980]: I0107 04:50:06.712208 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-slt7n" podUID="ae556004-2353-4d37-bee4-89ba3d60ba8e" containerName="registry-server" containerID="cri-o://38b01be05ae68a0cc9fc160733a1b91de625309db6566576ae7d9e40e111ff78" gracePeriod=2 Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.728101 4980 generic.go:334] "Generic (PLEG): container finished" podID="ae556004-2353-4d37-bee4-89ba3d60ba8e" containerID="38b01be05ae68a0cc9fc160733a1b91de625309db6566576ae7d9e40e111ff78" exitCode=0 Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.728450 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slt7n" event={"ID":"ae556004-2353-4d37-bee4-89ba3d60ba8e","Type":"ContainerDied","Data":"38b01be05ae68a0cc9fc160733a1b91de625309db6566576ae7d9e40e111ff78"} Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.728490 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slt7n" 
event={"ID":"ae556004-2353-4d37-bee4-89ba3d60ba8e","Type":"ContainerDied","Data":"90ea6f88b1b17697b83644f75094bfe5ca62212c8b34e5d209b82609014b2c68"} Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.728509 4980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90ea6f88b1b17697b83644f75094bfe5ca62212c8b34e5d209b82609014b2c68" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.755075 4980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.847763 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.848027 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.929905 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b25nh\" (UniqueName: \"kubernetes.io/projected/ae556004-2353-4d37-bee4-89ba3d60ba8e-kube-api-access-b25nh\") pod \"ae556004-2353-4d37-bee4-89ba3d60ba8e\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.930061 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-catalog-content\") pod \"ae556004-2353-4d37-bee4-89ba3d60ba8e\" (UID: \"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.930169 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-utilities\") pod \"ae556004-2353-4d37-bee4-89ba3d60ba8e\" (UID: 
\"ae556004-2353-4d37-bee4-89ba3d60ba8e\") " Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.930617 4980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.931629 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-utilities" (OuterVolumeSpecName: "utilities") pod "ae556004-2353-4d37-bee4-89ba3d60ba8e" (UID: "ae556004-2353-4d37-bee4-89ba3d60ba8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:07.939518 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae556004-2353-4d37-bee4-89ba3d60ba8e-kube-api-access-b25nh" (OuterVolumeSpecName: "kube-api-access-b25nh") pod "ae556004-2353-4d37-bee4-89ba3d60ba8e" (UID: "ae556004-2353-4d37-bee4-89ba3d60ba8e"). InnerVolumeSpecName "kube-api-access-b25nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.032879 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.032925 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b25nh\" (UniqueName: \"kubernetes.io/projected/ae556004-2353-4d37-bee4-89ba3d60ba8e-kube-api-access-b25nh\") on node \"crc\" DevicePath \"\"" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.133928 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae556004-2353-4d37-bee4-89ba3d60ba8e" (UID: "ae556004-2353-4d37-bee4-89ba3d60ba8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.135088 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae556004-2353-4d37-bee4-89ba3d60ba8e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.738904 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-slt7n" Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.781726 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-slt7n"] Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.790869 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-slt7n"] Jan 07 04:50:08 crc kubenswrapper[4980]: I0107 04:50:08.808321 4980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:50:09 crc kubenswrapper[4980]: I0107 04:50:09.310151 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rk7"] Jan 07 04:50:09 crc kubenswrapper[4980]: I0107 04:50:09.753673 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae556004-2353-4d37-bee4-89ba3d60ba8e" path="/var/lib/kubelet/pods/ae556004-2353-4d37-bee4-89ba3d60ba8e/volumes" Jan 07 04:50:10 crc kubenswrapper[4980]: I0107 04:50:10.759928 4980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k8rk7" podUID="c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" containerName="registry-server" containerID="cri-o://bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4" gracePeriod=2 Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.367666 4980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.507265 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-utilities\") pod \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.507319 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv25c\" (UniqueName: \"kubernetes.io/projected/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-kube-api-access-xv25c\") pod \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.507359 4980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-catalog-content\") pod \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\" (UID: \"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf\") " Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.508909 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-utilities" (OuterVolumeSpecName: "utilities") pod "c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" (UID: "c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.516832 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-kube-api-access-xv25c" (OuterVolumeSpecName: "kube-api-access-xv25c") pod "c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" (UID: "c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf"). InnerVolumeSpecName "kube-api-access-xv25c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.553778 4980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" (UID: "c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.610940 4980 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.610987 4980 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.611010 4980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv25c\" (UniqueName: \"kubernetes.io/projected/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf-kube-api-access-xv25c\") on node \"crc\" DevicePath \"\"" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.773400 4980 generic.go:334] "Generic (PLEG): container finished" podID="c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" containerID="bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4" exitCode=0 Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.773471 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rk7" event={"ID":"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf","Type":"ContainerDied","Data":"bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4"} Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.773531 4980 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8rk7" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.774375 4980 scope.go:117] "RemoveContainer" containerID="bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.774251 4980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rk7" event={"ID":"c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf","Type":"ContainerDied","Data":"2c550e63dd0bc88c5c9b425044848743ed57eb5a9298851ffb9bb28870ee32e2"} Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.818898 4980 scope.go:117] "RemoveContainer" containerID="79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.826724 4980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rk7"] Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.836681 4980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rk7"] Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.848635 4980 scope.go:117] "RemoveContainer" containerID="e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.907810 4980 scope.go:117] "RemoveContainer" containerID="bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4" Jan 07 04:50:11 crc kubenswrapper[4980]: E0107 04:50:11.908449 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4\": container with ID starting with bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4 not found: ID does not exist" containerID="bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.908490 4980 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4"} err="failed to get container status \"bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4\": rpc error: code = NotFound desc = could not find container \"bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4\": container with ID starting with bbb45e9afcaef2d09be77c3ee553512416000e86d23aabb2582b8698fc2657b4 not found: ID does not exist" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.908515 4980 scope.go:117] "RemoveContainer" containerID="79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24" Jan 07 04:50:11 crc kubenswrapper[4980]: E0107 04:50:11.909368 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24\": container with ID starting with 79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24 not found: ID does not exist" containerID="79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.909389 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24"} err="failed to get container status \"79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24\": rpc error: code = NotFound desc = could not find container \"79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24\": container with ID starting with 79e4c485f3e099fcf787dcb9f3cbf693c2d985e34efd0caca1f85066d308cf24 not found: ID does not exist" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.909402 4980 scope.go:117] "RemoveContainer" containerID="e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d" Jan 07 04:50:11 crc kubenswrapper[4980]: E0107 
04:50:11.909740 4980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d\": container with ID starting with e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d not found: ID does not exist" containerID="e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d" Jan 07 04:50:11 crc kubenswrapper[4980]: I0107 04:50:11.909759 4980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d"} err="failed to get container status \"e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d\": rpc error: code = NotFound desc = could not find container \"e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d\": container with ID starting with e98ed6d3f2b1de55cdd33250c88b2f6d672bf9a32e8fde94d407c221b62b146d not found: ID does not exist" Jan 07 04:50:13 crc kubenswrapper[4980]: I0107 04:50:13.757958 4980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf" path="/var/lib/kubelet/pods/c44a8c44-8de0-4c0c-8a00-3cb6b9c37cbf/volumes"